arXiv:2101.01160
# Molecular dynamics lattice gas equilibrium distribution function for Lennard-Jones particles

Aleksandra Pachalieva1,2 and Alexander J. Wagner3

1Center for Nonlinear Studies, Los Alamos National Laboratory, Los Alamos, NM 87545, USA
2Department of Mechanical Engineering, Technical University of Munich, 85748 Garching, Germany
3Department of Physics, North Dakota State University, Fargo, ND 58108, USA

[email protected]

###### Abstract

The molecular dynamics lattice gas method maps a molecular dynamics simulation onto a lattice gas using a coarse-graining procedure. It offers a novel, fundamental approach to deriving the lattice Boltzmann method by taking a Boltzmann average over the molecular dynamics lattice gas. A key property of the lattice Boltzmann method is the equilibrium distribution function, which was originally derived by assuming that the particle displacements in the molecular dynamics simulation are Boltzmann distributed. However, we recently discovered that a single Gaussian distribution function is not sufficient to describe the particle displacements in a broad transition regime between free particles and particles undergoing many collisions in one time step. In a recent publication, we proposed a Poisson weighted sum of Gaussians, which shows better agreement with the molecular dynamics data. Here, we derive a lattice Boltzmann equilibrium distribution function from the Poisson weighted sum of Gaussians model and compare it to an equilibrium distribution function measured from molecular dynamics data, as well as to an analytical approximation of the equilibrium distribution function obtained from a single Gaussian probability distribution function.
###### keywords: molecular dynamics, lattice gas method, lattice Boltzmann method, coarse-graining

## 1 Introduction

The molecular dynamics lattice gas (MDLG) method [15, 16] uses a coarse-graining procedure to establish a direct link between microscopic methods – in particular, molecular dynamics (MD) simulation – and mesoscale methods such as lattice gas (LG) [6, 3] and lattice Boltzmann methods (LBM) [19, 8]. The MDLG method relies fully on MD data, and as such it rigorously recovers the hydrodynamics of the underlying physical system; it can be used to verify the behavior and examine the properties of LG or LBM methods directly, without using the standard kinetic theory approach. Aspects that can be examined include fluctuating [10, 2, 5, 21], thermal [7, 11], and multi-phase and multi-component systems [12, 20, 8, 4]. A key feature of the LBM is the equilibrium distribution function. The LBM equilibrium distribution was originally derived by analogy to the continuous Boltzmann equation, where the equilibrium distribution for the velocities is a Maxwell-Boltzmann distribution. Accordingly, the moments of the discrete LBM velocity distribution were matched, to the degree possible, with the velocity moments of the Maxwell-Boltzmann distribution. In the alternative derivation of the LBM from MD, it was shown that these previously postulated equilibrium distributions are indeed, at least approximately, consistent with the MDLG approach for specific combinations of lattice and time spacing. In the original MDLG calculation of the equilibrium distribution by Parsa et al. [15], it was assumed that the particle displacements in the molecular dynamics simulation are also Boltzmann distributed. This assumption gave an adequate prediction of the global equilibrium distribution function of the lattice Boltzmann method.
However, on examining the equilibrium system more carefully, we noticed small deviations (up to 5%) between the analytically predicted and the measured equilibrium distribution functions. These deviations were traced back to the prediction of the one-particle displacement distribution function. In Pachalieva et al. [13], we proposed a correction of the displacement distribution function, showing that a dilute gas with an area fraction of $\phi=0.0784$ and a temperature of 20 in LJ units is better approximated by a Poisson weighted sum of Gaussians (WSG) probability distribution function. This probability distribution function takes into account that after a time step $\Delta t$ the particles can be divided into groups depending on the number of collisions they have experienced. In principle, the timing of the collisions should be random (given by a Poisson process); however, the resulting integrals over the collision times do not allow for an analytical solution. Thus, we assume that the particle collisions are evenly spaced, which may introduce a small error but makes the resulting displacements again Gaussian distributed. For details, please refer to [13]. The Poisson weighted sum of Gaussians probability distribution function also delivers better results in the purely ballistic and purely diffusive regimes (for very small or very large time steps, respectively), where the Poisson WSG formulation reduces to a single Gaussian. In the current publication, we show that the original premise of the paper [13] does indeed hold. We derive the MDLG equilibrium distribution function from the Poisson WSG one-particle displacement function and show that it compares favourably to an equilibrium distribution function measured from molecular dynamics (MD) simulation, whereas the single Gaussian equilibrium distribution function is a much poorer prediction. Our findings show that the Poisson WSG approximates the measured equilibrium distribution function significantly better.
The rest of the paper is organized as follows: We briefly describe the MDLG analysis method in Section 2. In Section 3, we derive the equilibrium distribution function from the one-particle displacement function. In Section 3 (a), we show how to derive the equilibrium distribution function when the distribution is given by a single Gaussian, and in Section 3 (b) when the displacements are instead distributed according to a Poisson WSG one-particle displacement function. In Section 4, we give a detailed description of the MD simulation setup used to obtain the MD data. The MD trajectories are later used to validate the theoretical solutions of the equilibrium distribution function. In Section 5, we compare the equilibrium distribution functions obtained on the one hand from theory, using either a single Gaussian or the novel Poisson weighted sum of Gaussians probability distribution function, and on the other hand measured from MD data. Our analysis shows a significant improvement of the analytical prediction of the equilibrium distribution function when the Poisson WSG model is used. Finally, in Section 6, we give a brief conclusion and suggestions for future work.

## 2 Molecular dynamics lattice gas method

In the MDLG analysis, we impose a lattice onto an MD simulation of Lennard-Jones particles and track the migration of the particles from one lattice position to another with displacement $v_{i}$ after a time step $\Delta t$, as shown in Fig. 1a. A schematic representation of the lattice is given in Fig. 1b, where the numbers 0 to 49 represent the index $i$ of the occupation number of a D2Q49 velocity set. We run molecular dynamics simulations and analyze the particles’ trajectories to obtain MDLG occupation numbers defined as

Figure 1: (Color online) (a) Sketch of the MDLG analysis. A lattice is superimposed onto the MD simulation domain. The movement of the particles is tracked from the central node using their MD trajectories.
The green circles represent the positions of the particles at time $t-\Delta t$ and the red circles are their respective positions at time $t$. Using the particle trajectories and the imposed lattice, the occupation number $n_{i}$ is defined as given in Eq. (1). The black arrows are the lattice velocities. Only the lattice velocities which have at least one particle within their area (i.e. a non-zero occupation number) are shown. (b) Schematic representation of the D2Q49 lattice with the numbering convention for the lattice velocities in two dimensions. The central point 0 corresponds to the zero velocity $v_{0}=(0,0)$, and the rest of the velocities are given as vectors connecting the central point and the lattice point in question, as shown in (a). The velocities are color coded according to their length.

$n_{i}(x,t)=\sum_{j}\Delta_{x}[x_{j}(t)]\Delta_{x-v_{i}}[x_{j}(t-\Delta t)],$ (1)

where the indicator function $\Delta_{x}[x_{j}(t)]=1$ if particle $j$ is in the lattice cell at position $x$ at time $t$, and $\Delta_{x}[x_{j}(t)]=0$ otherwise. Here, $x_{j}(t)$ is the position of the $j$-th particle at time $t$ and $v_{i}$ is the particle displacement, which in the MDLG description is strongly correlated with the lattice velocities. We can now cast the evolution of the occupation numbers $n_{i}$ in the form of a lattice gas evolution equation as

$n_{i}(x+v_{i},t+\Delta t)=n_{i}(x,t)+\Xi_{i},$ (2)

by defining the lattice gas collision operator $\Xi_{i}$ in terms of the occupation numbers as

$\Xi_{i}=n_{i}(x+v_{i},t+\Delta t)-n_{i}(x,t).$ (3)

The molecular dynamics lattice Boltzmann (MDLB) distribution function is defined as a Boltzmann ensemble average of the MDLG occupation numbers $n_{i}$ and is given by

$f_{i}=\langle n_{i}\rangle_{\mathrm{neq}}.$ (4)

By taking the non-equilibrium ensemble average of Eq.
(2), we obtain the MDLB evolution equation

$f_{i}(x+v_{i},t+\Delta t)=f_{i}(x,t)+\Omega_{i},\qquad\text{with}\quad\Omega_{i}=\langle\Xi_{i}\rangle_{\mathrm{neq}},$ (5)

where $\Omega_{i}$ is the MDLB collision operator. A key element of the LBM is the global equilibrium distribution function, which in the MDLB context is defined as an average of the lattice gas densities $n_{i}$ over the whole MD domain and all iterations of an equilibrium MD simulation. The MDLB equilibrium distribution function is given by

$\begin{split}f_{i}^{\mathrm{eq}}&=\langle n_{i}\rangle_{\mathrm{eq}}\\\ &=\left\langle\sum_{j}\Delta_{x}[x_{j}(t)]\Delta_{x-v_{i}}[x_{j}(t-\Delta t)]\right\rangle_{\mathrm{eq}}\\\ &=M\int dx_{1}\int d\delta x_{1}\;P^{(1),\mathrm{eq}}(x_{1},\delta x_{1})\Delta_{x}[x_{1}]\Delta_{x-v_{i}}[x_{1}-\delta x_{1}],\end{split}$ (6)

where $M$ is the total number of particles and $P^{(1),\mathrm{eq}}$ is the one-particle displacement distribution function in equilibrium. This allows us to obtain the equilibrium distribution function $f_{i}^{\mathrm{eq}}$ analytically from the one-particle displacement probability distribution function (PDF).

## 3 Derivation of the MDLB equilibrium distribution function

In the MDLB formulation, the equilibrium distribution function depends solely on the one-particle displacement distribution function. Thus, knowing $P^{(1),\mathrm{eq}}$ is crucial for predicting the equilibrium distribution function. In the following subsections, we derive the equilibrium distribution function (a) from a single Gaussian probability distribution function and (b) from a Poisson weighted sum of Gaussians probability distribution function.

### 3.1 Single Gaussian distribution model

In Parsa et al.
[15], a good approximation of the MDLB equilibrium distribution function is obtained from a single Gaussian in one dimension ($d=1$),

$\begin{split}P^{\mathrm{G}}_{\alpha}(\delta x)=\frac{1}{[2\pi\langle(\delta x_{\alpha})^{2}\rangle]^{d/2}}\exp\left[-\frac{(\delta x_{\alpha}-u_{\alpha}\Delta t)^{2}}{2\langle(\delta x_{\alpha})^{2}\rangle}\right],\end{split}$ (7)

with displacements $\delta x_{\alpha}$, second-order moment $\langle(\delta x_{\alpha})^{2}\rangle$, and mean velocity $u_{\alpha}$. The solution factorizes for higher dimensions and is given by

$P^{\mathrm{G}}(\delta x)=\prod_{\alpha=1}^{d}P^{\mathrm{G}}_{\alpha}(\delta x).$ (8)

Following Eq. (6), the equilibrium distribution function can be expressed as

$\frac{f^{\mathrm{eq,G}}_{i}}{\rho^{\mathrm{eq}}}=\prod_{\alpha=1}^{d}f_{i,\alpha}^{\mathrm{eq,G}},$ (9)

with $\rho^{\mathrm{eq}}$ being the mass density. The equilibrium distribution function $f_{i,\alpha}^{\mathrm{eq,G}}$ in one dimension is given by

$\begin{split}f_{i,\alpha}^{\mathrm{eq,G}}&=N\left(e^{-\frac{(u_{i,\alpha}-1)^{2}}{2a^{2}}}-2e^{-\frac{u_{i,\alpha}^{2}}{2a^{2}}}+e^{-\frac{(u_{i,\alpha}+1)^{2}}{2a^{2}}}\right)\\\ &+\frac{u_{i,\alpha}-1}{2}\left[\mathrm{erf}\left(\frac{u_{i,\alpha}-1}{a\sqrt{2}}\right)-\mathrm{erf}\left(\frac{u_{i,\alpha}}{a\sqrt{2}}\right)\right]\\\ &+\frac{u_{i,\alpha}+1}{2}\left[\mathrm{erf}\left(\frac{u_{i,\alpha}+1}{a\sqrt{2}}\right)-\mathrm{erf}\left(\frac{u_{i,\alpha}}{a\sqrt{2}}\right)\right],\end{split}$ (10)

with

$a^{2}=\frac{\langle(\delta x_{\alpha})^{2}\rangle}{(\Delta x)^{2}},\quad\qquad N=\frac{a}{\sqrt{2\pi}},\quad\qquad u_{i,\alpha}=v_{i,\alpha}-u_{\alpha},$ (11)

where $\langle(\delta x_{\alpha})^{2}\rangle$ is the mean-squared displacement, $\Delta x$ is the lattice size, and $u_{\alpha}$ is the mean velocity. We performed MD simulations with the mean velocity set to zero; however, we could obtain results for different mean velocities $u_{\alpha}$ by applying a Galilean transformation.
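As a sanity check, Eq. (10) can be evaluated numerically. A minimal sketch (the function name is ours, not from the paper):

```python
import math

def f_eq_gaussian_1d(v_i, u, a2):
    """One-dimensional single-Gaussian equilibrium distribution, Eq. (10),
    with a^2, N, and u_{i,alpha} as defined in Eq. (11)."""
    a = math.sqrt(a2)
    N = a / math.sqrt(2.0 * math.pi)
    ui = v_i - u                       # u_{i,alpha} = v_{i,alpha} - u_alpha
    s = a * math.sqrt(2.0)
    return (N * (math.exp(-(ui - 1) ** 2 / (2 * a2))
                 - 2 * math.exp(-ui ** 2 / (2 * a2))
                 + math.exp(-(ui + 1) ** 2 / (2 * a2)))
            + (ui - 1) / 2 * (math.erf((ui - 1) / s) - math.erf(ui / s))
            + (ui + 1) / 2 * (math.erf((ui + 1) / s) - math.erf(ui / s)))
```

At $a^{2}=1/6$ and $u=0$ this gives $f_{0}\approx 0.676$ and $f_{\pm 1}\approx 0.161$, close to the one-dimensional D2Q9 weights $2/3$ and $1/6$, and the values sum to one over all integer velocities, as they must for a normalized distribution.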
We have set the value of $a^{2}$ to approximately $1/6$, for which the MDLG results agree with the values of the D2Q9 lattice Boltzmann weights. For details regarding the derivation of the Gaussian equilibrium distribution function, please refer to [15]. Even though this formulation shows very good agreement with the measured equilibrium distribution function from MD simulations, under more careful investigation we found that there are discrepancies of up to about 5% for certain parameter regimes. This means that the displacement distribution function cannot be fully captured by a single Gaussian, and a more complex distribution function has to be applied.

### 3.2 Poisson weighted sum of Gaussians model

In Pachalieva et al. [13], we introduced a correction of the displacement PDF proposed by Parsa et al. [15], using a Poisson weighted sum of Gaussians (WSG) instead of a single Gaussian distribution function. The Poisson WSG is given by

$\begin{split}P^{\mathrm{WSG}}(\delta x)=\sum_{c=0}^{\infty}e^{-\lambda}\frac{\lambda^{c}}{c!}P^{c}(\delta x),\end{split}$ (12)

where the probability distribution function $P^{c}(\delta x)$ also factorizes for higher dimensions, equivalently to the single Gaussian distribution function as given in Eq. (8). The one-dimensional Poisson weighted sum of Gaussians probability distribution function $P^{c}_{\mathrm{\alpha}}(\delta x)$ is then given by

$P^{c}_{\mathrm{\alpha}}(\delta x)=\left[\frac{(\lambda+1)}{2\pi(c+1)\langle(\delta x_{\alpha})^{2}\rangle}\right]^{d/2}\exp\left[-\frac{(\lambda+1)(\delta x_{\alpha}-u_{\alpha}\Delta t)^{2}}{2(c+1)\langle(\delta x_{\alpha})^{2}\rangle}\right],$ (13)

where $\delta x_{\alpha}$ is the displacement in one dimension, $\langle(\delta x_{\alpha})^{2}\rangle$ is the second-order moment, $u_{\alpha}$ is the mean velocity, $c$ is the number of collisions a particle has experienced, and $\lambda$ is the average number of collisions.
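A minimal numerical sketch of Eqs. (12)-(13) for $d=1$, truncating the Poisson sum at a finite $c_{\max}$ (names are ours):

```python
import math

def p_wsg(dx, msd, lam, u=0.0, dt=1.0, cmax=60):
    """Poisson-weighted sum of Gaussians, Eqs. (12)-(13), in one dimension.
    dx: displacement, msd: second-order moment <(dx)^2>,
    lam: average number of collisions; the sum over c is truncated at cmax."""
    total = 0.0
    for c in range(cmax + 1):
        w = math.exp(-lam) * lam ** c / math.factorial(c)   # Poisson weight
        var = (c + 1) * msd / (lam + 1)                     # class-c variance
        total += w * math.exp(-(dx - u * dt) ** 2 / (2.0 * var)) \
                 / math.sqrt(2.0 * math.pi * var)
    return total
```

Numerically integrating this density confirms that it is normalized and that its second moment equals $\langle(\delta x)^{2}\rangle$, independent of $\lambda$.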
The fact that the new displacement distribution function is just a sum of Gaussians makes the calculation of the new MDLG equilibrium functions surprisingly simple. Thus, we obtain

$f_{i}^{\mathrm{eq}}=\sum_{c=0}^{\infty}e^{-\lambda}\frac{\lambda^{c}}{c!}f_{i}^{c,\mathrm{eq}}.$ (14)

The $f_{i}^{c,\mathrm{eq}}$, similar to Eq. (9), is given by

$\frac{f^{c,\mathrm{eq}}_{i}}{\rho^{\mathrm{eq}}}=\prod_{\alpha=1}^{d}f_{i,\alpha}^{c,\mathrm{eq}},$ (15)

where $\rho^{\mathrm{eq}}$ is the mass density and $f^{c,\mathrm{eq}}_{i,\alpha}$ in one dimension is given by

$\begin{split}f^{c,\mathrm{eq}}_{i,\alpha}=&\left\{\frac{N_{c}}{2\sqrt{\pi}}\left(e^{-\frac{(u_{i,\alpha}-1)^{2}}{N_{c}^{2}}}-2e^{-\frac{u_{i,\alpha}^{2}}{N_{c}^{2}}}+e^{-\frac{(u_{i,\alpha}+1)^{2}}{N_{c}^{2}}}\right)\right.\\\ &\left.+\frac{(u_{i,\alpha}-1)}{2}\left[\mathrm{erf}\left(\frac{(u_{i,\alpha}-1)}{N_{c}}\right)-\mathrm{erf}\left(\frac{u_{i,\alpha}}{N_{c}}\right)\right]\right.\\\ &\left.+\frac{(u_{i,\alpha}+1)}{2}\left[\mathrm{erf}\left(\frac{(u_{i,\alpha}+1)}{N_{c}}\right)-\mathrm{erf}\left(\frac{u_{i,\alpha}}{N_{c}}\right)\right]\right\},\end{split}$ (16)

with

$N_{c}=\sqrt{\frac{2a^{2}(c+1)}{\lambda+1}},$ (17)

where $a^{2}$ and $u_{i,\alpha}$ are defined in Eq. (11). The one-dimensional equilibrium distribution function given in Eq. (16) is similar to the single Gaussian equilibrium distribution function in Eq. (10); however, their weighting factors are not the same. The equilibrium distribution function in Eq. (16) also takes into account the average number of collisions $\lambda$, which needs to be determined. One way to approximate the average number of collisions $\lambda$ is by using the velocity auto-correlation function. However, the auto-correlation function is only a theoretical approximation and is not exact. To eliminate the second-order and fourth-order moment errors, we instead match these moments to the corresponding ones measured directly from the MD simulations.
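Eqs. (14)-(17) translate directly into code; a sketch under the same finite truncation of the Poisson sum (function name ours):

```python
import math

def f_eq_wsg_1d(v_i, u, a2, lam, cmax=60):
    """Poisson WSG equilibrium distribution in 1D, Eqs. (14)-(17):
    a Poisson-weighted sum of terms of the form Eq. (16) with width N_c."""
    ui = v_i - u
    f = 0.0
    for c in range(cmax + 1):
        w = math.exp(-lam) * lam ** c / math.factorial(c)
        Nc = math.sqrt(2.0 * a2 * (c + 1) / (lam + 1))      # Eq. (17)
        f += w * (Nc / (2.0 * math.sqrt(math.pi))
                  * (math.exp(-(ui - 1) ** 2 / Nc ** 2)
                     - 2.0 * math.exp(-ui ** 2 / Nc ** 2)
                     + math.exp(-(ui + 1) ** 2 / Nc ** 2))
                  + (ui - 1) / 2.0 * (math.erf((ui - 1) / Nc) - math.erf(ui / Nc))
                  + (ui + 1) / 2.0 * (math.erf((ui + 1) / Nc) - math.erf(ui / Nc)))
    return f
```

For $\lambda=0$ only the $c=0$ term survives and the expression reduces to the single Gaussian result of Eq. (10); for any $\lambda$ the values sum to one over the integer velocities.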
The second-order moment of the Poisson WSG one-particle distribution function can be derived from the second-order Gaussian integral

$\begin{split}\mu_{2}&=\int_{-\infty}^{\infty}P^{\mathrm{WSG}}(\delta x)(\delta x)^{2}\,d\delta x\\\ &=\int_{-\infty}^{\infty}\sum_{c=0}^{\infty}e^{-\lambda}\frac{\lambda^{c}}{c!}\frac{\sqrt{\lambda+1}}{\sqrt{2\pi(c+1)\langle(\delta x)^{2}\rangle}}\exp\left(-\frac{(\lambda+1)(\delta x-u\Delta t)^{2}}{2(c+1)\langle(\delta x)^{2}\rangle}\right)(\delta x)^{2}\,d\delta x\\\ &=\langle(\delta x)^{2}\rangle.\end{split}$ (18)

Analogously, we obtain the fourth-order moment from the fourth-order Gaussian integral

$\begin{split}\mu_{4}&=\int_{-\infty}^{\infty}P^{\mathrm{WSG}}(\delta x)(\delta x)^{4}\,d\delta x\\\ &=\int_{-\infty}^{\infty}\sum_{c=0}^{\infty}e^{-\lambda}\frac{\lambda^{c}}{c!}\frac{\sqrt{\lambda+1}}{\sqrt{2\pi(c+1)\langle(\delta x)^{2}\rangle}}\exp\left(-\frac{(\lambda+1)(\delta x-u\Delta t)^{2}}{2(c+1)\langle(\delta x)^{2}\rangle}\right)(\delta x)^{4}\,d\delta x\\\ &=\frac{3\langle(\delta x)^{2}\rangle^{2}}{(\lambda+1)^{2}}\left[\lambda^{2}+3\lambda+1\right].\end{split}$ (19)

By solving the quadratic equation for $\lambda$,

$\frac{3\mu_{2}^{2}}{(\lambda+1)^{2}}\left[\lambda^{2}+3\lambda+1\right]-\mu_{4}=0,$ (20)

we find the solutions

$\lambda_{1,2}=\frac{-9\mu_{2}^{2}\pm\sqrt{3[15\mu_{2}^{4}-4\mu_{2}^{2}\mu_{4}]}+2\mu_{4}}{2[3\mu_{2}^{2}-\mu_{4}]},$ (21)

where $\mu_{2}=\langle(\delta x)^{2}\rangle$ and $\mu_{4}$ are the second- and fourth-order displacement moments, respectively. We use the moments measured from MD simulations, which ensures that the Poisson weighted sum of Gaussians model has the same $\mu_{2}$ and $\mu_{4}$ moments. In Pachalieva et al. [13], we show that $\lambda_{2}$ provides the optimal solution, which we use to derive the Poisson WSG equilibrium distribution function. For a detailed derivation and discussion of the Poisson WSG displacement distribution function, please refer to Pachalieva et al. [13].
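Eq. (21) is straightforward to evaluate. A small sketch returning both roots; note that the two roots are reciprocals of each other, since the product of the roots of Eq. (20) equals one, and [13] discusses which branch (there called $\lambda_{2}$) is the physical one:

```python
import math

def lambdas_from_moments(mu2, mu4):
    """Both roots lambda_{1,2} of Eq. (20), as written in Eq. (21).
    mu2, mu4: second- and fourth-order displacement moments from MD."""
    disc = 3.0 * (15.0 * mu2 ** 4 - 4.0 * mu2 ** 2 * mu4)
    denom = 2.0 * (3.0 * mu2 ** 2 - mu4)
    root = math.sqrt(disc)
    return ((-9.0 * mu2 ** 2 + root + 2.0 * mu4) / denom,
            (-9.0 * mu2 ** 2 - root + 2.0 * mu4) / denom)
```

Feeding in moments generated from Eq. (19) with a known $\lambda$, e.g. $\mu_{2}=1$ and $\mu_{4}=11/3$ (corresponding to $\lambda=2$), returns the reciprocal pair $(1/2,\,2)$.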
Figure 2: (Color online) (a) Displacement probability distribution functions. The symbols (red) depict the PDF obtained from an MD simulation of LJ particles in equilibrium. The line (black) illustrates the Gaussian probability distribution function defined in Eq. (7) with the mean-squared displacement fitted directly to the MD data. The dashed line (blue) represents the Poisson WSG obtained from Eq. (12). Only the data for positive velocities are depicted, due to symmetry. (b) The difference between the distributions per interval $X_{i}$, as defined in Eq. (22). The presented data are for the standard parameters used in the paper and a coarse-grained time step $\Delta t=3.2$.

Meaningfully comparing two probability distribution functions is a non-trivial task, since there are often significant deviations in the tails of the distributions that would show up in a simpler measure such as dividing one distribution by the other. However, since the tails carry little weight, these deviations are not relevant for the system. In Pachalieva et al. [13], we therefore used the Kullback-Leibler (KL) divergence, a tool commonly used in machine learning. The element-wise definition of this function is given by

$K(X_{i})=K(R\parallel Q)=R(X_{i})\log\left({\frac{R(X_{i})}{Q(X_{i})}}\right),$ (22)

where $R(X_{i})$ and $Q(X_{i})$ are probability distributions over an interval $X_{i}$. By performing a sum over all the bins $X_{i}$, we obtain the Kullback-Leibler (KL) divergence [9] defined as

$D_{\text{KL}}(R\parallel Q)=\sum_{i}R(X_{i})\log\left({\frac{R(X_{i})}{Q(X_{i})}}\right).$ (23)

The KL divergence measures the discrepancy of one probability distribution function from another. It is always non-negative, $D_{\text{KL}}(R\parallel Q)\geq 0$, and equal to zero if and only if the probability distribution functions are identical, $R(X_{i})=Q(X_{i})$. In Fig.
2a, we show the true probability distribution function obtained from the MD data, $P^{\mathrm{MD}}(X_{i})$, the Gaussian probability distribution function $P^{\mathrm{G}}(X_{i})$, and the Poisson WSG distribution function $P^{\mathrm{WSG}}(X_{i})$. There is a visible divergence between the Gaussian and the other two distribution functions. We measured the element-wise Kullback-Leibler divergence $K(X_{i})$, as defined in Eq. (22), for $P^{\mathrm{G}}(X_{i})$ compared to the MD data and to the Poisson WSG distribution function, as shown in Fig. 2b. The results suggest that even though the Gaussian and the Poisson WSG probability distribution functions have the same second moment, their deviations in the fourth- and higher-order moments strongly influence the form of the distribution function. In Section 5, we show how these deviations affect the LBM equilibrium distribution function.

## 4 Simulation setup

All measured data, from the probability distribution functions of the displacements $P^{\mathrm{MD}}(X_{i})$ to the equilibrium distribution function $f_{i}^{\mathrm{eq,MD}}$ depicted in Figs. 3-5, are obtained from molecular dynamics simulations. To perform the MD simulations, we used the open-source molecular dynamics framework LAMMPS [17, 1] developed by Sandia National Laboratories. The LAMMPS package uses the velocity Verlet integration scheme. The MD simulations consist of particles interacting with the standard 6-12 Lennard-Jones (LJ) intermolecular potential given by

$V_{LJ}=4\varepsilon\left[\left(\frac{\sigma}{r}\right)^{12}-\left(\frac{\sigma}{r}\right)^{6}\right],$ (24)

where $\sigma$ is the distance at which the inter-particle potential goes to zero, $r$ is the distance between two particles, and $\varepsilon$ is the depth of the potential well. The particle mass and the LJ particle diameter are set to $m=1$ and $\sigma=1$, respectively.
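For reference, the potential of Eq. (24) in code (a trivial sketch in the reduced LJ units used in the paper, $\varepsilon=\sigma=1$):

```python
def v_lj(r, epsilon=1.0, sigma=1.0):
    """6-12 Lennard-Jones pair potential, Eq. (24)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)
```

It vanishes at $r=\sigma$ and has its minimum of $-\varepsilon$ at $r=2^{1/6}\sigma$.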
The LJ time scale is given by the time needed for a particle with kinetic energy of half the potential well depth $\varepsilon$ to traverse one diameter $\sigma$ of an LJ particle. This can also be expressed as

$\tau_{\mathrm{LJ}}=\sqrt{\frac{m\sigma^{2}}{\varepsilon}}.$ (25)

The thermal time scale corresponds to the time it takes a particle with kinetic energy of $1/2\;k_{B}T$ to traverse the diameter $\sigma$ of an LJ particle, which is given by

$\tau_{\mathrm{th}}=\sqrt{\frac{m\sigma^{2}}{k_{B}T}}.$ (26)

We executed molecular dynamics simulations at a temperature of $20$ in the LJ units defined above. This corresponds to a thermal time scale smaller than the LJ time scale $\tau_{\mathrm{LJ}}$ by a factor of $1/\sqrt{20}\approx 0.22$. The number of particles in each simulation is fixed to $N=99\,856$, filling a two-dimensional (2D) square domain with side length $L=1000\sigma$. The area fraction $\phi$ of the domain is calculated from the area of the circular LJ particles multiplied by the number of particles and divided by the area of the domain, where the diameter of a circular LJ particle is given by $\sigma$. The MD simulations considered in this publication have an area fraction of $\phi=0.078387$. We initialised the simulations using homogeneously distributed particles with kinetic energy corresponding to a temperature of 20 in LJ units. This corresponds to a dilute gas at high temperature. The temperature is well above the critical temperature for liquid-gas coexistence of $T_{c}=1.3120(7)$, and the density is well below the critical density $\rho_{c}=0.316(1)$ [18]. We focus our attention on MD simulations of a fairly dilute gas in equilibrium, since the assumption that the collision times are Poisson distributed is correct only for dilute systems.

Table 1: Initialization parameters of the molecular dynamics simulations performed using the LAMMPS framework.
For all MD simulations the MD step size is fixed to $0.0001\,\tau_{\mathrm{LJ}}$ and the number of coarse-grained iterations is $2\,000$.

| $\Delta t$ | $\Delta x$ | $lx$ | MD output frequency ($1/\tau_{\mathrm{LJ}}$) | Total MD time ($\tau_{\mathrm{LJ}}$) |
|---|---|---|---|---|
| 0.3911 | 4 | 250 | 3 911 | 782.2 |
| 0.5000 | 5 | 200 | 5 000 | 1 000.0 |
| 0.5626 | 5.5 | 180 | 5 626 | 1 125.2 |
| 0.6927 | 6.6(6) | 150 | 6 927 | 1 385.4 |
| 0.9009 | 8.3(3) | 120 | 9 009 | 1 801.8 |
| 1.1261 | 10 | 100 | 11 261 | 2 252.2 |
| 1.4994 | 12.5 | 80 | 14 994 | 2 998.8 |
| 1.6342 | 13.3(3) | 75 | 16 342 | 3 268.4 |
| 2.0338 | 15.625 | 64 | 20 338 | 4 067.6 |
| 2.9280 | 20 | 50 | 29 280 | 5 856.0 |
| 4.1821 | 25 | 40 | 41 821 | 8 364.2 |
| 6.1751 | 31.25 | 32 | 61 751 | 12 350.2 |

Since the MD simulations correspond to a dilute high-temperature gas, the particle velocities are also larger than in a typical molecular dynamics simulation. Thus, we set the MD step size to $0.0001\,\tau_{\mathrm{LJ}}$, which is small enough to ensure high accuracy of the MD data. We define a dimensionless coarse-grained time step $\Delta t$ as the product of the MD step size and the MD output frequency shown in Table 1. The time step $\Delta t$ is chosen such that the MD simulations satisfy the constraint that the ratio of the mean-squared displacement to the squared lattice size is fixed to

$a^{2}=\frac{\langle(\delta x)^{2}\rangle}{(\Delta x)^{2}}\approx 0.1611,$ (27)

which corresponds to the parameter $a^{2}$ given in Eq. (11) and has also been used in earlier publications [15, 14]. By fixing this value, we ensure that most of the LJ particles in equilibrium travel at most one lattice spacing, which corresponds to a D2Q9 lattice Boltzmann method.
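The scales quoted above are easy to verify; a minimal sketch using the paper's parameters (variable names ours, disk area taken as $\pi\sigma^{2}/4$):

```python
import math

m, sigma, eps, kBT = 1.0, 1.0, 1.0, 20.0         # reduced LJ units, T = 20
tau_lj = math.sqrt(m * sigma ** 2 / eps)          # Eq. (25): LJ time scale
tau_th = math.sqrt(m * sigma ** 2 / kBT)          # Eq. (26): thermal time scale

N, L = 99_856, 1000.0                             # particles, domain side
phi = N * math.pi * (sigma / 2.0) ** 2 / L ** 2   # area fraction of disks
```

This gives $\tau_{\mathrm{th}}/\tau_{\mathrm{LJ}}=1/\sqrt{20}\approx 0.22$ and $\phi\approx 0.0784$, consistent with the values quoted above.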
To verify that the Poisson WSG equilibrium distribution function $f_{i}^{\mathrm{eq,WSG}}$ approximates the MD data better than the single Gaussian equilibrium distribution function $f_{i}^{\mathrm{eq,G}}$ across time scales, from the ballistic to the diffusive regime, we vary the coarse-grained time step $\Delta t\in[0.3911,6.1751]$ and the lattice size $\Delta x\in[4,31.25]$ of the executed simulations. An overview of the MD simulation setup is given in Table 1. The number of lattice points $lx$ varies from 250 to 32, depending on the coarse-grained time step $\Delta t$. For each coarse-grained time step $\Delta t$, we performed $2\,000$ iterations, which corresponds to a total MD time of $782.2\,\tau_{\mathrm{LJ}}$ to $12\,350.2\,\tau_{\mathrm{LJ}}$ for the smallest and largest coarse-grained time step $\Delta t$, respectively. In order to bring the molecular dynamics simulations to an equilibrium state before we start collecting data, the initial 3 000 000 iterations of each simulation were discarded. The discarded iterations are not included in Table 1 for clarity. The MD simulation setup characterizes a hot dilute gas in equilibrium with the average velocity $u_{\alpha}$ fixed to zero,

$Nu_{\alpha}=\sum_{j=1}^{N}v_{j,\alpha}=0,$ (28)

where $N$ is the number of LJ particles. We performed standard molecular dynamics simulations without a thermostat. In the LAMMPS framework this is called NVE integration. The microcanonical ensemble (NVE) is characterized by a constant number of particles (N), constant volume (V), and constant energy (E).

## 5 Results

In order to obtain a measured equilibrium distribution function, we post-process the collected MD data using the MDLG analysis tool. The MD domain is overlaid with a lattice and we trace the migration of the particles over time from one lattice cell to another. By doing this, we obtain the MDLG occupation numbers $n_{i}(x,t)$ as defined in Eq.
(1), which after sufficient averaging deliver the MDLB equilibrium distribution function $f_{i}^{\mathrm{eq,MD}}$ as defined in Eq. (6). The analytical models of the equilibrium distribution function defined in Section 3 depend only on the choice of the one-particle displacement distribution function. Since we define two different one-particle distributions, we expect to see changes in the respective equilibrium distribution functions derived from them, even though their second-order moments are equivalent. However, a non-trivial question remains: how does the migration of particles from one node to another change within a lattice?

Figure 3: (Color online) (a) Estimated equilibrium distribution functions $f_{i}^{\mathrm{eq,*}}$ obtained either from MD simulation data ($f_{i}^{\mathrm{eq,MD}}$), depicted with symbols, from the theoretical solution using a single Gaussian distribution function ($f_{i}^{\mathrm{eq,G}}$), depicted with dotted lines, or from the theoretical solution using the Poisson WSG ($f_{i}^{\mathrm{eq,WSG}}$), depicted with dashed lines. (b) Our numbering of the velocities in a D2Q25 lattice. The equilibrium distribution function values $f_{i}^{\mathrm{eq,*}}$ are color coded, and each color represents one of the six sets of equilibrium distribution function contributions. Here, the asterisk (∗) stands for the variety of methods used to obtain an equilibrium distribution function: measured from MD simulation, the single Gaussian analytical solution, and the Poisson WSG analytical solution. Note that with a simple-minded direct comparison on a log-scale (rather than a Kullback-Leibler measure), practically irrelevant errors for very small occupation numbers stand out here.

To gain a better understanding, we calculate the equilibrium distribution function for an extended D2Q25 lattice, which corresponds to two neighboring cells in the $X$- and $Y$-directions for a two-dimensional domain. A schematic representation of the D2Q25 lattice is given in Fig. 3b.
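The Kullback-Leibler comparison of Eqs. (22) and (23), used for Fig. 2b, can be sketched as follows (function names ours; bins with $R(X_{i})=0$ contribute zero):

```python
import math

def kl_terms(r, q):
    """Element-wise contributions K(X_i), Eq. (22)."""
    return [ri * math.log(ri / qi) if ri > 0 else 0.0 for ri, qi in zip(r, q)]

def kl_divergence(r, q):
    """D_KL(R || Q), Eq. (23): the sum of K(X_i) over all bins."""
    return sum(kl_terms(r, q))
```

For example, `kl_divergence([0.5, 0.5], [0.25, 0.75])` is positive (about 0.144), while the divergence vanishes only for identical distributions.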
In an equilibrium state with zero initial velocity, one distinguishes six sets of equilibrium distribution function contributions: $f_{0}^{\mathrm{eq,*}},f_{1-4}^{\mathrm{eq,*}},f_{5-8}^{\mathrm{eq,*}},f_{9-12}^{\mathrm{eq,*}},f_{13-20}^{\mathrm{eq,*}},$ and $f_{21-24}^{\mathrm{eq,*}},$ where each set has a unique displacement length from the central lattice node. When measuring the equilibrium distribution function $f_{i}^{\mathrm{eq,MD}}$ from the MD simulations, we average over the lattice velocities in each set to obtain a symmetric probability distribution function. It is worth mentioning that the deviations of the $f_{i}^{\mathrm{eq,MD}}$ values within each set are relatively small. The MDLG analysis was introduced for a D2Q49 lattice including a third layer of neighbouring cells; however, the number of neighboring layers considered depends solely on the problem at hand. For a simulation in equilibrium with zero velocity and the parameter $a^{2}$ defined in Eq. (27) set to approximately $0.1611$, we obtain an equilibrium distribution function which is symmetric and has significant contributions up to the D2Q25 lattice nodes. The estimated equilibrium distribution function $f_{i}^{\mathrm{eq,*}}$ for a variety of coarse-grained time steps $\Delta t\in[0.3911,6.1751]$ is shown in Fig. 3a. The equilibrium distribution function $f_{i}^{\mathrm{eq,*}}$, as mentioned above, is obtained from three different methods: $f_{i}^{\mathrm{eq,MD}}$ is measured from an MD simulation, $f_{i}^{\mathrm{eq,G}}$ is estimated theoretically using a single Gaussian distribution function, and $f_{i}^{\mathrm{eq,WSG}}$ is estimated theoretically from a Poisson WSG distribution function. The theoretical equilibrium distribution function models are described in detail in Sections 3.1 and 3.2, respectively. In Fig. 3a one can see that the largest equilibrium distribution function contributions come from the first-layer neighbours $f_{0-8}^{\mathrm{eq,*}}$.
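The six sets follow directly from the symmetry of the lattice: grouping the 25 velocities $v=(v_{x},v_{y})$ with $v_{x},v_{y}\in\{-2,\dots,2\}$ by their squared length reproduces the multiplicities $1,4,4,4,8,4$ of $f_{0}$, $f_{1-4}$, $f_{5-8}$, $f_{9-12}$, $f_{13-20}$, and $f_{21-24}$. A quick check:

```python
from collections import defaultdict

# Group the D2Q25 velocities by |v|^2; in equilibrium with zero mean
# velocity, f_i^eq depends only on the velocity magnitude, so each
# group forms one set of equal equilibrium values.
sets = defaultdict(list)
for vx in range(-2, 3):
    for vy in range(-2, 3):
        sets[vx * vx + vy * vy].append((vx, vy))

sizes = {k: len(v) for k, v in sorted(sets.items())}
# six distinct squared lengths: 0, 1, 2, 4, 5, 8
```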
These nodes are approximated very well by both theoretical models; see Fig. 4a for a detailed comparison of the measured and the theoretical $f_{0-8}^{\mathrm{eq,*}}$. The next groups, $f_{9-12}^{\mathrm{eq,*}}$ and $f_{13-20}^{\mathrm{eq,*}}$, are smaller than $f_{0-8}^{\mathrm{eq,*}}$ by one to two orders of magnitude. For $f_{9-12}^{\mathrm{eq,*}}$ and $f_{13-20}^{\mathrm{eq,*}}$, the deviations between the measured values and the theoretical single Gaussian model become larger, whereas the Poisson WSG values $f_{9-20}^{\mathrm{eq,*}}$ show very good agreement with the measured equilibrium distribution function. The diagonal nodes in the second layer, $f_{21-24}^{\mathrm{eq,*}}$, are even smaller, and their values could be considered negligible. However, the measured equilibrium distribution function $f_{i}^{\mathrm{eq,MD}}$ shows good agreement with the theoretical Poisson WSG $f_{i}^{\mathrm{eq,WSG}}$ even for very small contributions such as $f_{21-24}^{\mathrm{eq,*}}$. This suggests that these contributions, though very small, are not mere noise but are theoretically justified. Figure 4: (Color online) (a) First layer equilibrium distribution functions $f_{0-8}^{\mathrm{eq,*}}$ scaled to the Gaussian equilibrium distribution function. The equilibrium distribution functions are obtained either from MD simulation data ($f_{0-8}^{\mathrm{eq,MD}}$), from the theoretical solution using a single Gaussian distribution function ($f_{0-8}^{\mathrm{eq,G}}$), or from the theoretical solution using the Poisson WSG ($f_{0-8}^{\mathrm{eq,WSG}}$). (b) Schematic representation of the D2Q25 lattice. The equilibrium distribution function values $f_{i}^{\mathrm{eq,*}}$ are color coded, and each color represents one of the six sets of equilibrium distribution function contributions.
Here, the asterisk (∗) stands for the variety of methods used to obtain an equilibrium distribution function: measured from MD simulation, the single Gaussian analytical solution, or the Poisson WSG analytical solution. Figures 4a and 5a depict the equilibrium distribution functions scaled to the single Gaussian equilibrium distribution function. They show how the equilibrium distribution functions measured from the MD simulation and the novel Poisson WSG model deviate from the single Gaussian. The first layer equilibrium distribution function values are shown in Fig. 4a. These nodes have the largest contribution to the total equilibrium distribution function. Fig. 4a shows more particles staying at node zero and a depression for the first neighbouring layer (nodes 1 to 8). This very same feature appears in Fig. 2b: the $\mathrm{P}_{\lambda 2}^{\mathrm{WSG}}\log(\mathrm{P}_{\lambda 2}^{\mathrm{WSG}}/\mathrm{P}^{\mathrm{G}})$ values, depicted in blue, show that the number of small displacements ($X_{i}/\Delta x\in[0,0.3]$) is enhanced, while the number of displacements with $X_{i}/\Delta x\in[0.3,1.0]$ is suppressed. Figure 5: (Color online) (a) Second layer equilibrium distribution functions $f_{9-24}^{\mathrm{eq,*}}$ scaled to the Gaussian equilibrium distribution function. The equilibrium distribution functions are obtained either from MD simulation data ($f_{9-24}^{\mathrm{eq,MD}}$), from the theoretical solution using a single Gaussian distribution function ($f_{9-24}^{\mathrm{eq,G}}$), or from the theoretical solution using the Poisson WSG ($f_{9-24}^{\mathrm{eq,WSG}}$). (b) Schematic representation of the D2Q25 lattice. The equilibrium distribution function values $f_{i}^{\mathrm{eq,*}}$ are color coded, and each color represents one of the six sets of equilibrium distribution function contributions.
Here, the asterisk (∗) stands for the variety of methods used to obtain an equilibrium distribution function: measured from MD simulation, the single Gaussian analytical solution, or the Poisson WSG analytical solution. The second layer equilibrium distribution function values are depicted in Fig. 5a. As one can see in Fig. 2b, there is an enhanced probability of large displacements, $X_{i}/\Delta x\in[0.9,1.6]$, which corresponds to the larger values of $f_{9-24}^{\mathrm{eq,WSG}}$ in Fig. 5a. The deviations (up to approximately 4.5%) from the theoretical single Gaussian equilibrium distribution function are also larger than for the first layer nodes $f_{0-8}^{\mathrm{eq,WSG}}$. Since the true values of $f_{9-24}^{\mathrm{eq,WSG}}$ are smaller by multiple orders of magnitude than those of the first layer neighbours $f_{0-8}^{\mathrm{eq,WSG}}$, these deviations, though larger, are almost irrelevant for the total equilibrium distribution function. Nevertheless, Fig. 5a clearly shows that the Poisson WSG equilibrium distribution function captures the MD data more precisely. ## 6 Outlook In this article, we have derived a better approximation for the MDLG equilibrium distribution function. It deviates from the previous best approximation by Parsa et al. [15] in a broad transition region between the ballistic and diffusive regimes of random particle displacements. Although these deviations are small, we expect them to be of great importance in the analysis of non-equilibrium systems, particularly systems not too far from equilibrium, as is typical in hydrodynamic systems. What we have outlined here is the equilibrium behavior of the MDLG mapping of a molecular dynamics simulation onto a lattice gas. The key interest, however, lies in the non-equilibrium predictions of this mapping. In future research, we will investigate MDLG predictions for lattice gas and lattice Boltzmann collision operators.
In such systems we expect to find only small deviations from local equilibrium, and to quantify these small deviations it is essential to have a very good understanding of the equilibrium behavior of the MDLG mapping. This manuscript has no further supporting data. AW supervised the research, contributed to it, and revised the manuscript. AP contributed to the research, embedded the proposed model, set up the test cases, performed the data analysis, and wrote the manuscript. All authors read and approved the manuscript. The authors declare that they have no competing interests. AP is partially supported by the Center for Nonlinear Studies (CNLS) and the Laboratory Directed Research and Development (LDRD) program at Los Alamos National Laboratory (LANL), and by the German Federal Ministry of Education and Research (BMBF) in the scope of the project Aerotherm (reference numbers: 01IS16016A-B). ## References * [1] LAMMPS Official Website: http://lammps.sandia.gov. * Adhikari et al., [2005] Adhikari, R., Stratford, K., Cates, M. E., and Wagner, A. J. (2005). Fluctuating lattice Boltzmann. Europhysics Letters (EPL), 71(3):473–479. * Blommel and Wagner, [2018] Blommel, T. and Wagner, A. J. (2018). Integer lattice gas with Monte Carlo collision operator recovers the lattice Boltzmann method with Poisson-distributed fluctuations. Physical Review E, 97(2):023310. * Briant et al., [2004] Briant, A., Wagner, A., and Yeomans, J. (2004). Lattice Boltzmann simulations of contact line motion. I. Liquid-gas systems. Physical Review E, 69(3):031602. * Dünweg et al., [2007] Dünweg, B., Schiller, U. D., and Ladd, A. J. C. (2007). Statistical mechanics of the fluctuating lattice Boltzmann equation. Phys. Rev. E, 76:036704. * Frisch et al., [1986] Frisch, U., Hasslacher, B., and Pomeau, Y. (1986). Lattice-Gas Automata for the Navier-Stokes Equation. Physical Review Letters, 56(14):1505–1508. * He et al., [1998] He, X., Chen, S., and Doolen, G. D. (1998).
A novel thermal model for the lattice Boltzmann method in incompressible limit. Journal of Computational Physics, 146(1):282–300. * He and Luo, [1997] He, X. and Luo, L.-S. (1997). Theory of the lattice Boltzmann method: From the Boltzmann equation to the lattice Boltzmann equation. Physical Review E, 56(6):6811. * Kullback and Leibler, [1951] Kullback, S. and Leibler, R. A. (1951). On information and sufficiency. The Annals of Mathematical Statistics, 22(1):79–86. * Ladd, [1993] Ladd, A. J. C. (1993). Short-time motion of colloidal particles: Numerical simulation via a fluctuating lattice-Boltzmann equation. Phys. Rev. Lett., 70:1339–1342. * McNamara et al., [1995] McNamara, G. R., Garcia, A. L., and Alder, B. J. (1995). Stabilization of thermal lattice Boltzmann models. Journal of Statistical Physics, 81(1-2):395–408. * Osborn et al., [1995] Osborn, W., Orlandini, E., Swift, M. R., Yeomans, J., and Banavar, J. R. (1995). Lattice Boltzmann study of hydrodynamic spinodal decomposition. Physical Review Letters, 75(22):4031. * Pachalieva and Wagner, [2020] Pachalieva, A. and Wagner, A. J. (2020). Non-Gaussian distribution of displacements for Lennard-Jones particles in equilibrium. Phys. Rev. E, 102:053310. * Parsa et al., [2019] Parsa, M. R., Pachalieva, A., and Wagner, A. J. (2019). Validity of the molecular-dynamics-lattice-gas global equilibrium distribution function. International Journal of Modern Physics C, 30(10):1941007. * Parsa and Wagner, [2017] Parsa, M. R. and Wagner, A. J. (2017). Lattice gas with molecular dynamics collision operator. Physical Review E, 96(1):013314. * Parsa and Wagner, [2020] Parsa, M. R. and Wagner, A. J. (2020). Large fluctuations in nonideal coarse-grained systems. Phys. Rev. Lett., 124:234501. * Plimpton, [1995] Plimpton, S. (1995). Fast Parallel Algorithms for Short-Range Molecular Dynamics. Journal of Computational Physics, 117(1):1–19. * Potoff and Panagiotopoulos, [1998] Potoff, J. J. and Panagiotopoulos, A. Z. (1998).
Critical point and phase behavior of the pure fluid and a Lennard-Jones mixture. The Journal of Chemical Physics, 109(24):10914–10920. * Qian et al., [1992] Qian, Y. H., D’Humières, D., and Lallemand, P. (1992). Lattice BGK Models for Navier-Stokes Equation. Europhysics Letters (EPL), 17(6):479–484. * Shan and Doolen, [1995] Shan, X. and Doolen, G. (1995). Multicomponent lattice-Boltzmann model with interparticle interaction. Journal of Statistical Physics, 81(1-2):379–393. * Wagner and Strand, [2016] Wagner, A. J. and Strand, K. (2016). Fluctuating lattice Boltzmann method for the diffusion equation. Phys. Rev. E, 94:033302.
# Stoichiometry controls the dynamics of liquid condensates of associative proteins

Pierre Ronceray (Center for the Physics of Biological Function, Princeton University) and Yaojun Zhang (Center for the Physics of Biological Function, Princeton University), who contributed equally; Xichong Liu (Department of Chemical and Biological Engineering, Princeton University; Stanford University School of Medicine); Ned S. Wingreen, [email protected] (Department of Molecular Biology, Princeton University; Lewis-Sigler Institute for Integrative Genomics, Princeton University)

###### Abstract Multivalent associative proteins with strong complementary interactions play a crucial role in phase separation of intracellular liquid condensates. We study the internal dynamics of such “bond-network” condensates composed of two complementary proteins via scaling analysis and molecular dynamics. We find that when stoichiometry is balanced, relaxation slows down dramatically due to a scarcity of alternative partners following a bond break. This microscopic slow-down strongly affects the bulk diffusivity, viscosity, and mixing, which provides a means to experimentally test our predictions. Protein-rich liquid condensates, also known as membraneless organelles, have recently emerged as an important paradigm for intracellular organization Brangwynne _et al._ (2009); Brangwynne (2013); Banani _et al._ (2017). Several distinct molecular mechanisms involved in condensate phase separation have been characterized Dignon _et al._ (2020), including weak interactions between intrinsically disordered regions of proteins, interactions with RNA and DNA, and specific protein-to-protein complementary interactions. Here we focus on the latter mechanism, often described in terms of “sticker-and-spacer” models Choi _et al._ (2020), where strongly interacting complementary “stickers” are separated by flexible “spacers”, which have little to no interactions.
In a simple case, only two species are involved, with complementary sticker domains (Fig. 1a), and the phase-separated liquid consists of a dynamically rearranging network of these bound domains (Fig. 1b). This paradigm of a binary mixture of complementary proteins has been observed in membraneless organelles such as the algal pyrenoid Freeman Rosenzweig _et al._ (2017), as well as in artificial protein condensates such as SUMO-SIM assemblies Banani _et al._ (2016). Recent studies show that such binary liquids characterized by strong complementary interactions differ in their properties from usual, non-biological liquids: for instance, their valence sensitively controls their phase boundary through a “magic number” effect Freeman Rosenzweig _et al._ (2017); Xu _et al._ (2020); Zhang _et al._ (2020), and they can exhibit long-lived metastable clusters prior to macroscopic phase separation following a quench Ranganathan and Shakhnovich (2020). Little is known, however, about the bulk dynamical properties of these liquids. They are expected to inherit some properties of associative polymers—a class of materials characterized by long chains with sparse sticky sites Rubinstein and Dobrynin (1997). In such materials, relaxation is slowed down by the attachment-detachment dynamics of binding sites, resulting in _sticky reptation_ Zhang _et al._ (2018). However, the corresponding role of attachment-detachment dynamics has not yet been considered in liquid protein condensates. In this Letter, we study the bulk relaxation mechanisms of liquids consisting of a binary mixture of multivalent complementary proteins (Fig. 1a-b). With theory and simulations, we show that even in such simple systems, the strong specificity of interactions results in a finely tuned response to changes in composition—a property that cells might exploit to dynamically adapt the mixing properties of condensates.
We first present a simple kinetic model that predicts a strong dependence of the local relaxation time of bonds on the composition of the liquid: at equal stoichiometry of complementary domains, we anticipate a sharp peak in the relaxation time. We then employ molecular dynamics simulations to confirm these predictions and show their striking consequences for the bulk diffusivity and the overall viscosity of the liquid. Finally, we demonstrate that this effect quantitatively and qualitatively affects the mixing dynamics of droplets of different compositions, and propose experimental ways to test our theoretical predictions. ## Kinetic model for the bond relaxation. We consider the dense phase of multivalent proteins of two different types, denoted A and B (Fig. 1a), where each domain can bind to one and only one domain of the complementary type. The free energy favoring formation of such a bond is $\Delta F$, with a corresponding unbinding Arrhenius factor $\epsilon=\exp(-\Delta F)$ (we set the thermal energy $k_{B}T=1$ throughout). We consider the strong-binding regime, _i.e._ $\epsilon\ll 1$. In this regime, the system at any time looks like a gel-forming network with most bonds between domains satisfied (Fig. 1b). However, over sufficiently long times, bonds still break and rearrange, so the system relaxes and can flow as a liquid. We investigate here the dependence of this relaxation time on the Arrhenius factor $\epsilon$ and on the composition of the liquid. In the strong-binding regime, local relaxation is controlled by individual bond breaking (Fig. 1c). This process is slow and thermally activated, occurring at a dissociation rate $k_{d}=\epsilon/\tau_{0}$, where $\tau_{0}$ is a microscopic relaxation time, and these events are rapidly followed by rebinding.
However, the two newly unbound complementary domains are part of the network, and thus are not free: they remain confined and diffuse only in a small volume $v_{\mathrm{cage}}$ around their initial position (Fig. 1d). This caging volume is determined by the length and flexibility of linkers. Subsequent to a bond breaking, there is therefore a high probability that the two former partners will rebind to each other, negating the effect of the bond break on system relaxation. Only if either of the two finds a new, unbound complementary domain within the cage volume (Fig. 1e) does the initial break contribute to system relaxation and liquidity. If we denote by $p$ the probability that either domain finds a new partner, the effective relaxation time can thus be approximated as $\tau_{\mathrm{rel}}=1/(pk_{d})$. To estimate the probability $p$, we note that if there are on average $n$ free domains in the volume $v_{\mathrm{cage}}$, the probability of finding a new partner prior to rebinding to the former one can be approximated as $p=n/(1+n)$. We can then express $n=v_{\mathrm{cage}}c_{\mathrm{free}}$ in terms of the concentration $c_{\mathrm{free}}=c_{\mathrm{A}}+c_{\mathrm{B}}$ of unbound domains in the system, where we denote by $c_{\mathrm{A}}$ and $c_{\mathrm{B}}$ the respective concentrations of free domains of each type. We define the stoichiometry difference $\delta=c_{\mathrm{A}}-c_{\mathrm{B}}$ between these concentrations (which depends only on the overall composition, not on the fraction bound), and $c_{\mathrm{AB}}$ as the concentration of bound domain pairs. We assume that the linkers are sufficiently flexible to consider the binding state of each domain of a protein as independent of the others, and thus treat the binding-unbinding process as a well-mixed solution. The dissociation equilibrium reads $K_{d}=c_{\mathrm{A}}c_{\mathrm{B}}/c_{\mathrm{AB}}$, with $K_{d}$ the dissociation constant.
We thus have: $c_{\mathrm{free}}=\sqrt{\delta^{2}+4K_{d}c_{\mathrm{AB}}}.$ (1) The concentration of free monomers thus exhibits a global minimum at $\delta=0$ (Fig. 1f). We relate the dissociation constant to the Arrhenius factor for unbinding, writing $K_{d}=\epsilon/v_{0}$ where $v_{0}$ is a molecular volume. Indeed, $K_{d}=k_{d}/k_{a}$ where the dissociation rate $k_{d}=\epsilon/\tau_{0}$ is proportional to the Arrhenius factor, assuming that the association rate $k_{a}$ is independent of the binding strength. We can thus express the relaxation time as: $\tau_{\mathrm{rel}}=\frac{\tau_{0}}{\epsilon}\left(1+\frac{1}{v_{\mathrm{cage}}\sqrt{\delta^{2}+4\epsilon c_{AB}/v_{0}}}\right).$ (2) When $n\ll 1$, _i.e._ when there are few available partners within reach of a domain, the second term in Eq. 2 dominates the relaxation time. In particular, $\tau_{\mathrm{rel}}$ exhibits a sharp maximum at $\delta=0$, whose magnitude scales as $\tau_{\mathrm{rel}}\propto\epsilon^{-3/2}$. This corresponds to correlated dissociation events: neither of the two domain types is in excess with respect to the other, and so rebinding to a new partner is conditioned on finding another thermally activated unbound domain within $v_{\mathrm{cage}}$. The concentrations of such unbound domains are $c_{\mathrm{A}}=c_{\mathrm{B}}=\sqrt{K_{d}c_{\mathrm{AB}}}\propto\epsilon^{1/2}$. In contrast, for $\delta\gg 1/v_{\mathrm{cage}}$ such that $n\gg 1$, binding to a new partner is fast and essentially independent of $\delta$, so that $\tau_{\mathrm{rel}}\propto\epsilon^{-1}$. This scaling behavior is our central prediction, and is illustrated in Fig. 1g. Figure 1: Stoichiometry controls the bond relaxation time of multivalent associative proteins. (a) Sketch of associative multivalent proteins, with complementary domains separated by flexible linkers. (b) Strong yet reversible binding between proteins leads them to condense into a network with most bonds satisfied. 
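Eqs. (1) and (2) can be checked numerically. The sketch below uses hypothetical parameter values ($\tau_{0}=1$, $v_{0}=1$, $v_{\mathrm{cage}}=2.2$, $c_{\mathrm{AB}}=1$; only $v_{\mathrm{cage}}=2.2$ is a value quoted later for the simulations) and reproduces both the sharp peak of the relaxation time at $\delta=0$ and its $\epsilon^{-3/2}$ scaling.

```python
import math

def tau_rel(delta, eps, tau0=1.0, v0=1.0, v_cage=2.2, c_ab=1.0):
    """Relaxation time of Eq. (2): unbinding at rate eps/tau0, followed
    by rebinding to a new partner with probability p = n / (1 + n)."""
    c_free = math.sqrt(delta**2 + 4.0 * eps * c_ab / v0)  # Eq. (1)
    return (tau0 / eps) * (1.0 + 1.0 / (v_cage * c_free))

eps = 1e-4
# Sharp maximum of the relaxation time at equal stoichiometry:
peak = tau_rel(0.0, eps)
off = tau_rel(0.1, eps)

# Scaling at delta = 0: tau_rel ~ eps**(-3/2) in the strong-binding limit.
e1, e2 = 1e-6, 1e-8
slope = ((math.log(tau_rel(0.0, e1)) - math.log(tau_rel(0.0, e2)))
         / (math.log(e1) - math.log(e2)))
```

The fitted log-log slope comes out very close to $-3/2$ at $\delta=0$, while for $\delta\gg 1/v_{\mathrm{cage}}$ the first term dominates and the slope tends to $-1$, as stated above.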
(c-e) Schematic of the bond relaxation mechanism. When two initially bound domains (c) unbind, the two are caged in a small volume $v_{\mathrm{cage}}$ (d). Two events can then occur: the initially bound domains can rebind, or, if a free domain is within reach, a new bond may form (e) which is the system’s basic relaxation mechanism. (f) Fraction of unbound domains (Eq. 1) of both types as a function of stoichiometry difference. (g) Relaxation time (Eq. 2) corresponding to the process of unbinding and then rebinding with a new partner (c-e), as a function of stoichiometry difference. Here $\epsilon=e^{-\Delta F}$. ## Molecular dynamics simulations. Figure 2: Molecular Dynamics simulations reveal the importance of stoichiometry to the dynamical properties of the condensate. (a) MD model for the multivalent associative proteins. Colored spheres represent A and B domains. (b) Representative snapshot of the dense, network-forming liquid condensate. (c) Bond relaxation time (see text) as a function of stoichiometry for different binding strengths. Symbols indicate MD simulations; solid curves indicate theory (Eq. 2) with $v_{\mathrm{cage}}=2.2$ (fitted, consistent with the plateau of MSD in (e)), $\tau_{0}=1.0$ (corresponding to the unbinding time in the absence of any interaction), and $K_{d}=c_{\mathrm{A}}c_{\mathrm{B}}/c_{\mathrm{AB}}$ measured from data at $\delta=0$. (d) Bond relaxation time $\tau_{\mathrm{rel}}$ as a function of binding strength is consistent with predicted scaling for both equal and unequal stoichiometries (Eq. 2, Fig. 1g). (e) Mean squared displacement (MSD) of individual domains as a function of time reveals diffusive scaling (dashed line) at long times (here $\delta=0$). (f) Diffusion coefficient of the minority species as a function of binding strength at equal and unequal stoichiometry. (g) Diffusion coefficient plotted against bond relaxation time, for all values of $\delta$ and $\Delta F$. 
The dotted black line indicates $D\propto\tau_{\mathrm{rel}}^{-1}$. Transparent circles correspond to systems where one component is in large excess, $|\delta|>0.2c_{\mathrm{tot}}$, for which disconnected proteins dominate the diffusivity. (h) Viscosity, obtained using the Green-Kubo relation, as a function of binding strength, shows similar scaling to the bond relaxation time (d). We employ molecular dynamics simulations to test our theoretical predictions for the relaxation time (Eq. 2). Specifically, we model the system schematized in Fig. 1a-b using a bead-spring representation, where only the binding domains are simulated explicitly (Fig. 2a). Binding between complementary domains is modeled by a soft attractive potential minimized when the beads fully overlap, while strong repulsion between beads of the same type prevents the formation of multiple bonds involving the same domain (see Methods). The range of the repulsive interaction between domains sets the unit of length, while the unit of time is chosen to be the average time it takes for a free domain to diffuse a unit length. We simulate only the dense phase of this phase-separating system (Fig. 2b). The control parameters are the binding free energy $\Delta F$ and the stoichiometric difference $\delta=c_{\mathrm{A}}-c_{\mathrm{B}}$, while the total concentration of domains $c_{\mathrm{tot}}$ is held fixed. Simulations are performed using LAMMPS noa ; Plimpton (1995) (see Methods). We first study the relaxation of individual bonds. To quantify this relaxation, we compute the bond adjacency matrix $A_{ij}(t)$, where $A_{ij}(t)=1$ if domains $i$ and $j$ are bound at time $t$, and $0$ otherwise.
We first obtain the average autocorrelation function of this matrix, $C(\Delta t)=\langle\sum_{i,j}A_{ij}(t)A_{ij}(t+\Delta t)\rangle_{t}$, where the sum runs over all pairs of complementary domains, and then extract the bond relaxation time $\tau$ by integration of the normalized autocorrelation, $\tau=\int_{0}^{\infty}C(\Delta t)d\Delta t/C(0)$. The resulting relaxation time $\tau$ is plotted as a function of stoichiometry difference $\delta=c_{\mathrm{A}}-c_{\mathrm{B}}$ for different values of $\Delta F$ in Fig. 2c (symbols). These values are in good agreement with the theoretical prediction of Eq. 2 (solid curves), and in particular exhibit a clear maximum at equal stoichiometry ($\delta=0$). The magnitude and sharpness of the peak increases with the binding free energy $\Delta F$. Furthermore, we confirm in Fig. 2d that $\tau$ scales as $\epsilon^{-3/2}=\exp(3\Delta F/2)$ at equal stoichiometry, and as $\epsilon^{-1}=\exp(\Delta F)$ at unequal stoichiometry. Thus, the relaxation time increases much faster with $\Delta F$ at equal stoichiometry, in agreement with our analytical prediction (Eq. 2). ## Diffusivity and viscosity. How does this sizable difference in relaxation times influence macroscopic condensed-phase properties such as diffusivity and viscosity? To answer these questions, we first monitor the mean squared displacement (MSD) of individual binding domains as a function of lag time (Fig. 2e). Several distinct regimes are apparent in the MSD: short times correspond to bond-level vibrations, the plateau at intermediate times reveals caging within the bonded network, while the long-time scaling $\mathrm{MSD}\propto\Delta t$ is diffusive, confirming that the system behaves as a liquid. We extract the long-time diffusion coefficient from these simulations, and find that it directly reflects the bond relaxation time, _i.e._ $D\propto 1/\tau$ (Fig. 2g), and thus scales as $\epsilon^{3/2}$ at equal stoichiometry (Fig. 2f). 
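The autocorrelation-based extraction of $\tau$ described above can be sketched as follows. The trajectory here is synthetic (each bond survives a time step with probability $q$ and never rebinds), standing in for MD adjacency data; its expected relaxation time is roughly $1/(1-q)$ steps.

```python
import random

def bond_relaxation_time(frames):
    """Integrate the normalized autocorrelation of the bond adjacency
    'matrix', stored sparsely as one set of bonded pairs per frame:
    tau = sum_dt C(dt) / C(0), with C(dt) = <|A(t) & A(t+dt)|>_t."""
    T = len(frames)
    corr = []
    for dt in range(T):
        vals = [len(frames[t] & frames[t + dt]) for t in range(T - dt)]
        corr.append(sum(vals) / len(vals))
    return sum(c / corr[0] for c in corr)

# Synthetic data: 2000 bonds, each breaking independently with prob 1-q per step.
random.seed(0)
q, T = 0.9, 120
frames = [set(range(2000))]
for _ in range(T - 1):
    frames.append({b for b in frames[-1] if random.random() < q})

tau = bond_relaxation_time(frames)  # expected near 1/(1-q) = 10 steps
```

In the MD analysis the sum over lag times plays the role of the integral $\int_{0}^{\infty}C(\Delta t)\,d\Delta t/C(0)$; here it recovers the known lifetime of the synthetic bonds to within the statistics of the sample.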
This shows that the slow bond relaxation within the connected network dominates the diffusive properties of the system. We note however that at large stoichiometry differences ($|\delta|>0.2c_{\mathrm{tot}}$, transparent symbols in Fig. 2g), fully unbound proteins of the majority species exist and diffuse rapidly through the network, thus violating these scaling laws. Turning to the viscosity $\eta$ of the liquid, which we measure using the Green-Kubo relation between viscosity and equilibrium stress fluctuations Todd and Daivis (2017), we observe similarly that $\eta\propto\tau$ (Fig. 2h). The macroscopic transport properties of this binary liquid thus directly reflect the highly stoichiometry-dependent molecular relaxation mechanism illustrated in Fig. 1: in the strong-binding regime, the viscosity of the liquid dramatically increases near equal stoichiometry. ## Mixing dynamics. Our predictions for the dependence of bulk transport coefficients on the stoichiometry of the associative protein condensate have experimentally testable consequences. For instance, by preparing a homogeneous droplet and fluorescently tagging domains on one side, one could measure the mixing dynamics as a function of the composition. We simulate the relaxation of the composition profile for this case by putting in contact two simulation boxes (Fig. 3a-b). We monitor the relaxation of the tagged composition difference between the two halves of the simulation box (Fig. 3c) and extract the relaxation time by exponential fitting of the decay curve (Fig. 3d). Consistent with our equilibrium analysis, we find that mixing is much faster when a species is in excess (Fig. 3d, squares) than when stoichiometry is balanced (Fig. 3d, circles).
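The Green-Kubo estimate of $\eta$ used above integrates the equilibrium shear-stress autocorrelation, $\eta=(V/k_{B}T)\int_{0}^{\infty}\langle\sigma_{xy}(0)\sigma_{xy}(t)\rangle\,dt$. As a minimal numerical sketch (with a synthetic, exponentially decaying stress autocorrelation standing in for the measured MD stress fluctuations, and the $14^{3}$ box volume quoted in the Methods):

```python
import math

def green_kubo_viscosity(acf, dt, volume, kT):
    """eta = (V / kT) * integral of the stress autocorrelation (trapezoid)."""
    integral = dt * (sum(acf) - 0.5 * (acf[0] + acf[-1]))
    return volume * integral / kT

# Synthetic autocorrelation <s(0) s(t)> = var * exp(-t / tc): integral = var * tc.
var, tc, dt = 0.5, 2.0, 0.01
acf = [var * math.exp(-k * dt / tc) for k in range(5000)]  # t up to 25 * tc
eta = green_kubo_viscosity(acf, dt, volume=14**3, kT=1.0)
eta_exact = 14**3 * var * tc  # closed form for this synthetic decay
```

Because the autocorrelation decays on the bond-relaxation time scale, the integral, and hence $\eta$, inherits the $\tau$ scaling observed in Fig. 2h.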
Interestingly, if the two boxes initially have distinct compositions, mixing is significantly faster: indeed, the gradient of the bound fraction of domains results in a strong chemical potential gradient, and thus in a large thermodynamic force restoring compositional homogeneity. Figure 3: Composition controls mixing rate near equal stoichiometry. (a) Snapshot of an MD simulation with initially tagged particles on the left side of the box. (b) Concentration profiles for tagged particles along the long axis at different times, for equal stoichiometry $\delta=0$, showing slow relaxation towards the homogeneous state. (c) Relaxation of the tagged concentration difference between the two half-boxes, for variable binding free energy. (d) Equilibration time as a function of binding strength. The unbalanced case has $\delta=0.061$. ## Discussion. In this Letter, we investigated the dynamics of protein-rich condensates characterized by strong, specific interactions between complementary binding sites. Our theoretical analysis of the molecular-level relaxation mechanisms in these liquids suggests a strong composition dependence: near equal stoichiometry of complementary binding sites, the dynamics of the liquid dramatically slows down. This slowing is due to the lack of free binding sites at equal composition, which leads to a predominance of rebinding following bond breaks. We confirmed this mechanism through molecular dynamics simulations and showed that it controls the equilibrium diffusivity and viscosity of the liquid network. The molecular-level connectivity relaxation of protein liquids through binding-unbinding events is generally not directly accessible in experiments. By contrast, our predictions for macroscopic transport quantities are readily testable, for instance using engineered protein condensates such as SUMO-SIM Banani _et al._ (2016) and SH3-PRM Li _et al._ (2012) systems.
Our predictions would also hold in other liquids characterized by strong specific interactions, such as highly controllable DNA nanoparticles Conrad _et al._ (2019). In such systems, the effect of composition on diffusivity could be observed using fluorescence recovery after photobleaching Taylor _et al._ (2019), as in Fig. 3, and nanoparticle tracking Feric _et al._ (2016), while our predictions on viscosity and mixing dynamics could be tested by monitoring the shape relaxation of merging droplets Ghosh and Zhou (2020). While the dynamics of protein condensates can be regulated by many factors, such as density Kaur _et al._ (2019); Ghosh and Zhou (2020), salt concentration, and the presence of RNA Elbaum-Garfinkle _et al._ (2015), our work highlights the possibility that cells can also fine-tune the mechanical and dynamical properties of their membraneless organelles through small changes in composition. Beyond controlling the time scale of internal mixing and merging of these droplets, stoichiometry-dependent slowing could also impact the mobility and exchange rates of “clients” – constituents of the condensates that do not contribute directly to phase separation, but may be functionally important for the cell Banani _et al._ (2017). Overall, we have shown that high-specificity liquids have unusual physical properties and provide novel avenues that cells could use to regulate their phase-separated bodies. ### _Methods_. Molecular Dynamics simulations are performed using the March 2020 version of LAMMPS noa . Proteins of type A and B are represented by bead-spring multimers with respectively 6 and 4 binding domains (chosen with different valency to avoid magic-number effects associated with the formation of stable dimers Freeman Rosenzweig _et al._ (2017); Xu _et al._ (2020); Zhang _et al._ (2020)). Simulations are done in the NVE ensemble using a Langevin thermostat, with energy normalized so that $k_{B}T=1$.
Links between domains in a given protein are modeled as finite extensible nonlinear elastic (FENE) bonds, with interaction potential $E(r)=-0.5KR_{0}^{2}\log\left[1-(r/R_{0})^{2}\right]$ as a function of bond elongation $r$, with coefficients $K=3$ and $R_{0}=3$. Interactions between domains of the same type are given by a repulsive truncated Lennard-Jones potential, $E(r)=4\varepsilon\left[\left(\frac{\sigma}{r}\right)^{12}-\left(\frac{\sigma}{r}\right)^{6}\right]$ with $\varepsilon=1$, $\sigma=1$ (which sets the unit of length), and cutoff at $R=2^{1/6}$. Binding between complementary domains occurs via a soft potential, $E(r)=A\left(1+\cos(\pi r/r_{c})\right)$ for $r<r_{c}$, with cutoff $r_{c}=0.5$. Energy is minimized when domains fully overlap, and the Lennard-Jones repulsive interaction between domains of the same type ensures that binding is one-to-one. The interaction strength $A$ is related to the binding free energy by $\Delta F=-\ln\left(\int_{0}^{r_{c}}4\pi r^{2}e^{-E(r)}dr/(4\pi r_{c}^{3}/3)\right)$. We set the average time it takes for an unbound domain to diffuse a unit length to be the unit of time, $\tau_{0}=1$. The simulation step is $\delta t=0.0176$. We simulate only the dense phase, with periodic boundary conditions (box size: $14^{3}$ for Fig. 2, $42\times 14\times 14$ for Fig. 3) and density typical of a demixed droplet with a free surface. The total concentration $c_{\mathrm{tot}}=1.05$ of domains is kept fixed while the stoichiometry $\delta$ is varied. To ensure equilibration of the system, the attraction strength $A$ is annealed from zero to its final value over a time of $5\tau$, where $\tau$ is the bond relaxation time. The system then evolves for another $5\tau$, prior to measurements performed over $20\tau$. In Fig. 2, measurements of $\tau$, MSD, and $D$ have $N=5$ repeats; measurements of $\eta$ have $N=20$. Statistical error bars are smaller than the symbol size. In Fig.
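The quoted relation between the interaction strength $A$ and $\Delta F$ can be evaluated by direct quadrature. The sketch below implements the formula exactly as printed; the convention that attraction corresponds to negative $A$ (as for the LAMMPS `soft` pair style, whose functional form matches the one above) is our reading, so the sign of the result should be interpreted accordingly.

```python
import math

def delta_F(A, rc=0.5, n=20000):
    """Evaluate the printed relation
    -ln( int_0^rc 4 pi r^2 exp(-E(r)) dr / (4 pi rc^3 / 3) )
    for the soft potential E(r) = A (1 + cos(pi r / rc))."""
    total = 0.0
    for k in range(n):  # midpoint rule in r
        r = (k + 0.5) * rc / n
        E = A * (1.0 + math.cos(math.pi * r / rc))
        total += r * r * math.exp(-E)
    integral = 4.0 * math.pi * total * rc / n
    volume = 4.0 * math.pi * rc**3 / 3.0
    return -math.log(integral / volume)

# With A = 0 the free-energy difference vanishes; making A more negative
# (stronger attraction, assuming the LAMMPS sign convention) deepens the
# bound state, so the magnitude of the result grows with |A|.
```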
3, the system is initially annealed with walls separating the two halves of the system, with different labels for domains in either side. At $t=0$, the walls are removed and mixing starts. ### _Acknowledgments_. This work was supported in part by the National Science Foundation, through the Center for the Physics of Biological Function (PHY-1734030). ## References * Brangwynne _et al._ (2009) C. P. Brangwynne, C. R. Eckmann, D. S. Courson, A. Rybarska, C. Hoege, J. Gharakhani, F. Jülicher, and A. A. Hyman, Science 324, 1729 (2009). * Brangwynne (2013) C. P. Brangwynne, The Journal of Cell Biology 203, 875 (2013). * Banani _et al._ (2017) S. F. Banani, H. O. Lee, A. A. Hyman, and M. K. Rosen, Nature Reviews Molecular Cell Biology 18, 285 (2017). * Dignon _et al._ (2020) G. L. Dignon, R. B. Best, and J. Mittal, Annual Review of Physical Chemistry 71, 53 (2020). * Choi _et al._ (2020) J.-M. Choi, A. S. Holehouse, and R. V. Pappu, Annual Review of Biophysics 49, 107 (2020). * Freeman Rosenzweig _et al._ (2017) E. S. Freeman Rosenzweig, B. Xu, L. Kuhn Cuellar, A. Martinez-Sanchez, M. Schaffer, M. Strauss, H. N. Cartwright, P. Ronceray, J. M. Plitzko, F. Förster, N. S. Wingreen, B. D. Engel, L. C. M. Mackinder, and M. C. Jonikas, Cell 171, 148 (2017). * Banani _et al._ (2016) S. F. Banani, A. M. Rice, W. B. Peeples, Y. Lin, S. Jain, R. Parker, and M. K. Rosen, Cell 166, 651 (2016). * Xu _et al._ (2020) B. Xu, G. He, B. G. Weiner, P. Ronceray, Y. Meir, M. C. Jonikas, and N. S. Wingreen, Nature Communications 11, 1561 (2020). * Zhang _et al._ (2020) Y. Zhang, B. Xu, B. G. Weiner, Y. Meir, and N. S. Wingreen, bioRxiv , 2020.08.24.264655 (2020). * Ranganathan and Shakhnovich (2020) S. Ranganathan and E. I. Shakhnovich, eLife 9, e56159 (2020). * Rubinstein and Dobrynin (1997) M. Rubinstein and A. V. Dobrynin, Trends polym. sci. (Regul. ed.) 5, 181 (1997). * Zhang _et al._ (2018) Z. Zhang, Q. Chen, and R. H. Colby, Soft Matter 14, 2961 (2018). 
* (13) “LAMMPS Molecular Dynamics Simulator,” available from https://lammps.sandia.gov/. * Plimpton (1995) S. Plimpton, Journal of Computational Physics 117, 1 (1995). * Todd and Daivis (2017) B. D. Todd and P. J. Daivis, _Nonequilibrium Molecular Dynamics: Theory, Algorithms and Applications_ (Cambridge University Press, Cambridge, 2017). * Li _et al._ (2012) P. Li, S. Banjade, H.-C. Cheng, S. Kim, B. Chen, L. Guo, M. Llaguno, J. V. Hollingsworth, D. S. King, S. F. Banani, P. S. Russo, Q.-X. Jiang, B. T. Nixon, and M. K. Rosen, Nature 483, 336 (2012). * Conrad _et al._ (2019) N. Conrad, T. Kennedy, D. K. Fygenson, and O. A. Saleh, Proceedings of the National Academy of Sciences 116, 7238 (2019). * Taylor _et al._ (2019) N. O. Taylor, M.-T. Wei, H. A. Stone, and C. P. Brangwynne, Biophysical Journal 117, 1285 (2019). * Feric _et al._ (2016) M. Feric, N. Vaidya, T. S. Harmon, D. M. Mitrea, L. Zhu, T. M. Richardson, R. W. Kriwacki, R. V. Pappu, and C. P. Brangwynne, Cell 165, 1686 (2016). * Ghosh and Zhou (2020) A. Ghosh and H.-X. Zhou, Angewandte Chemie International Edition 59, 20837 (2020). * Kaur _et al._ (2019) T. Kaur, I. Alshareedah, W. Wang, J. Ngo, M. M. Moosa, and P. R. Banerjee, Biomolecules 9, 71 (2019). * Elbaum-Garfinkle _et al._ (2015) S. Elbaum-Garfinkle, Y. Kim, K. Szczepaniak, C. C.-H. Chen, C. R. Eckmann, S. Myong, and C. P. Brangwynne, Proceedings of the National Academy of Sciences 112, 7189 (2015).
# Scale-free networks may not necessarily witness cooperation

Deep Nath, Saptarshi Sinha, and Soumen [email protected]

Department of Physics, Bose Institute, 93/1 Acharya Prafulla Chandra Road, Kolkata 700009, India

###### Abstract Networks with a scale-free degree distribution are widely thought to promote cooperation in various games. Herein, by studying the well-known prisoner’s dilemma game, we demonstrate that this need not necessarily be true. For the very same degree sequence and degree distribution, we present a variety of possible behaviour. We reassess the perceived importance of hubs in a network towards the maintenance of cooperation. We also reevaluate the dependence of cooperation on network clustering and assortativity. 

###### pacs: 02.50.Le Decision theory and game theory

###### pacs: 89.75.Hc Networks and genealogical trees

Evolutionary game theory (EGT) has captured the serious attention of evolutionary biologists, ecologists, computer scientists and statistical physicists over the last few decades. This is chiefly due to its potential to effectively understand the challenge of the evolution and maintenance of cooperation, from microscopic to macroscopic scales, in the Darwinian context. In classical game theory, players are rational individuals who can choose their own strategy [1]. This decision-making ability enables players to maximize their payoff in games. However, EGT differs from classical game theory in that individuals are not driven by rationality per se and may not require complete information about other players [2]. Here, players are genetically constrained to perform a specific strategy [3], which naturally discourages “mixing” of strategies. Such invariant strategies are therefore also referred to as pure strategies. EGT deals with interactions between two or more genetically distinct populations sharing common resources and other environmental factors. 
One of the goals of EGT is to model population dynamics on evolutionary time scales. Here, the population structure in the steady state comprises players having evolutionarily stable strategies (ESS). Evolutionary stability refers to such genetic compositions in which no other mutant genotype can successfully invade a population by evolutionary processes like natural selection [4, 5, 6]. EGT enables us to investigate various evolutionary processes by knowing the frequency-dependent steady-state outcome of two or more interacting populations. Prime factors in evolutionary games include the strategies of players and the rules of the game. Cooperation between living organisms may flourish irrespective of the presence of free-riders [7, 8, 9]. EGT has been studied primarily on four types of games: prisoner’s dilemma (PD), harmony, snowdrift and coordination, which differ in their payoff values and steady states [5]. In the last few decades, much research has been done on the maintenance of cooperation in various games. Among these, PD is significant because defection would be the natural tendency in PD [10]. Apart from game rules, the underlying structure of the population also plays an essential role in the outcome of the game [11, 12, 13, 14, 15, 16, 17, 18]. The underlying graph topology imparts spatial restrictions on the interactions between players. These spatial restrictions may act in favour of cooperation. In PD games played on homogeneous population structures, it is difficult to maintain cooperation [6, 19]. On the other hand, cooperation could thrive in heterogeneous populations. Thus, the outcome of a game depends on the structure of the population, the types of payoffs and sundry factors like mobility [20]. Networks have been found to be useful in fields [21, 22] as diverse as mutagenesis and phage resistance [23], image-processing and non-invasive diagnostics [24], infrastructure [25] and optogenetics [26, 27]. 
While degree is only one of the many metrics in networks [21, 22], it has received perhaps the most emphasis in network literature [28]. Graphs with power-law degree distributions have been generally alluded to as “scale-free networks” in the literature [29, 28, 30, 31]. Heterogeneity in scale-free (SF) networks can be better understood through measures such as the S-metric [32, 33]. As is well-known, the mechanism of generation [29, 34] can imprint its signature on the structure of the network [32]. It has been reported earlier that scale-free networks possess an inherent tendency to promote cooperation [35]. The underlying intuition seems to be that when cooperators are hubs, they can survive in a population by accumulating higher payoffs as compared to their defecting neighbours [36, 37]. It has also been thought that factors like clustering and assortativity could influence this outcome, as a higher clustering coefficient and high assortativity between cooperators may enhance cooperation [38, 39, 40, 41, 42]. Herein, we demonstrate that these need not necessarily be true. Indeed, for the very same degree sequence and degree distribution, we demonstrate that SF networks may display a rich diversity in behaviour with regard to cooperation. Let $\cal{G(V,E)}$ denote a graph, where $\cal V$ and $\cal E$ denote the set of nodes and edges respectively. $|{\cal V}|={\cal N}$ and $|{\cal E}|$ denote the number of nodes and edges respectively in $\cal{G(V,E)}$. Henceforth, we often refer to $\cal{G(V,E)}$ as $\cal{G}$. We now define $S=\frac{s({\cal G})}{s_{max}}=\frac{\sum_{{\cal E}_{ij}\in{\cal E}}{k_{i}}{k_{j}}}{s_{max}}$ (1) Here, $i$ and $j$ are the end nodes of the edge ${\cal E}_{ij}\in{\cal E}$. The degrees of nodes $i$ and $j$ are denoted by $k_{i}$ and $k_{j}$ respectively. If $\cal K$ denotes the degree sequence of ${\cal G}$, let ${\cal G}(\cal K)$ denote the set of graphs with degree sequence $\cal K$. 
$s_{max}=max\\{s({\cal G}):{\cal G}\in{\cal G}(\cal K)\\}$, whence $0<S({\cal G})\leq 1$. Only a completely disconnected graph has $S=0$ and is therefore excluded herein. Graphs having different values of $S({\cal G})$ can possess the same degree sequence; their degree distribution is then obviously identical. Herein, $S({\cal G})$ is used to represent different graphs with identical degree sequence and hence identical degree distribution [32]. Henceforth, we mostly refer to $S({\cal G})$ simply as $S$. $S({\cal G})$ can be defined for virtually any graph. However, it has been widely used to differentiate between various SF networks [32, 43], which are the prime object of study in this letter. Degree assortativity, $r$, broadly captures whether nodes having similar degree are connected to each other [43, 44]. $r=\frac{[\sum_{{\cal E}_{ij}\in{\cal E}}{k_{i}}{k_{j}}]-[{\sum_{i\in{\cal V}}\frac{{k_{i}}^{2}}{2}}]^{2}/|{\cal E}|}{[{\sum_{i\in{\cal V}}\frac{{k_{i}}^{3}}{2}}]-[{\sum_{i\in{\cal V}}\frac{{k_{i}}^{2}}{2}}]^{2}/|{\cal E}|}$ (2) As is well known, $-1\leq r\leq 1$. Graphs with positive and negative values of $r$ are termed assortative and disassortative respectively. In assortative graphs, nodes with higher degree are predominantly connected to each other. In disassortative graphs, nodes with higher degree are predominantly connected to nodes with lower degree. $S(\cal G)$ reflects the extent to which a given graph is scale-free [32, 43]. By definition, all ${\cal G}\in\\{\cal G(\cal K)\\}$ possess a strictly identical degree sequence. Herein, we are not only interested in graphs with the same power-law degree distribution, but additionally in graphs with an identical degree sequence. Therefore, in this letter $S$, and not $r$, is the natural and obvious choice for the role of the key structural index. We simulate the evolutionary PD game on heterogeneous populations. 
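For concreteness, both indices can be evaluated directly from an edge list. The following plain-Python sketch (function and variable names are ours, not from the letter) computes $s({\cal G})$ of Eqn. 1 and $r$ of Eqn. 2 for a small graph; the normalisation $s_{max}$ is omitted, since obtaining it requires a search over all of ${\cal G}(\cal K)$.

```python
# Plain-Python sketch (our own code, not the authors'): s(G) of Eqn. 1 and
# the degree assortativity r of Eqn. 2, from an undirected edge list.
def degrees(edges):
    deg = {}
    for i, j in edges:
        deg[i] = deg.get(i, 0) + 1
        deg[j] = deg.get(j, 0) + 1
    return deg

def s_metric(edges):
    # s(G) = sum over edges of k_i * k_j; normalisation by s_max is omitted.
    deg = degrees(edges)
    return sum(deg[i] * deg[j] for i, j in edges)

def assortativity(edges):
    # r of Eqn. 2, written in terms of the three sums appearing there.
    deg = degrees(edges)
    m = len(edges)
    se = s_metric(edges)                           # sum over edges of k_i k_j
    s2 = sum(k ** 2 for k in deg.values()) / 2.0   # sum over nodes of k_i^2 / 2
    s3 = sum(k ** 3 for k in deg.values()) / 2.0   # sum over nodes of k_i^3 / 2
    return (se - s2 ** 2 / m) / (s3 - s2 ** 2 / m)

# A 4-node path 0-1-2-3 with degrees (1, 2, 2, 1):
path = [(0, 1), (1, 2), (2, 3)]
print(s_metric(path))                  # 2 + 4 + 2 = 8
print(round(assortativity(path), 6))   # -0.5 (a path is disassortative)
```

The expensive part of the S-metric is precisely $s_{max}$, which requires maximising $s({\cal G})$ over all graphs with the given degree sequence.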
The population structure has been initially considered as a Barabási-Albert (BA) network. BA networks can be generated through the mechanism of preferential attachment [29, 34]. They exhibit a power-law degree distribution. The extent of prevalence of scale-free networks in the real world has been extensively discussed [30, 45]. For each ensemble, initially a BA network, $\cal G_{BA}$, is generated. From $\cal G_{BA}$, a set of scale-free networks, $\\{\cal G_{SF}\\}$, is obtained by repeated degree-preserving double-edge swaps [46, 47]. Thus, at every step, the removal of two randomly chosen edges, ${\cal E}_{ij}$ and ${\cal E}_{kl}$, is accompanied by the creation of two new edges, ${\cal E}_{ik}$ and ${\cal E}_{jl}$, while retaining the degree of each node. We can hardly overemphasise that $\forall\cal G\in\\{\cal G_{SF}\\}$ the degree sequence is the same and naturally the degree distribution is identical to that of $\cal G_{BA}$. Each $\cal G\in\\{\cal G_{SF}\\}$ would, however, generally possess a value of $S$ different from $S(\cal G_{BA})$. It should be noted that no node or edge is removed or added during rewiring by degree-preserving double-edge swaps. We can easily obtain the value of $max(s)$ in $\\{\cal G_{SF}\\}$. However, it is not possible to achieve an arbitrarily low specified value of $S$ for every $\cal G\in\\{\cal G_{SF}\\}$. The minimum obtainable value of $S$ would depend on $\cal N$, $|{\cal E}|$ and the edge density of $\cal G_{BA}$, among other factors. Here, ${\cal N}=1024$ and we have been able to generate graphs with $S$ as low as $S=0.3$. Besides, generating graphs with arbitrarily low values of $S$ at a given $\cal N$ is computationally prohibitive [32]. At the start of each ensemble, the population is randomly divided into an equal number of cooperators, $C$, and defectors, $D$. Thus, the initial fraction of cooperators, $f_{C_{i}}=0.5$. Each node in ${\cal G}$ represents a player, who can interact with other players directly connected to it. 
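A minimal sketch of one such degree-preserving double-edge swap follows (our own implementation, not the authors' code; graph libraries such as NetworkX provide an equivalent `double_edge_swap` routine). Swaps that would create self-loops or multi-edges are rejected, so every node retains its degree:

```python
import random

def double_edge_swap(edges, nswap, seed=0):
    """Degree-preserving rewiring: replace edges (i,j) and (k,l) by (i,k)
    and (j,l), rejecting swaps that would create self-loops or duplicate
    edges, so every node keeps its degree."""
    rng = random.Random(seed)
    edges = [tuple(e) for e in edges]
    present = {frozenset(e) for e in edges}
    done, attempts = 0, 0
    while done < nswap and attempts < 100 * nswap:   # guard against stalling
        attempts += 1
        a, b = rng.sample(range(len(edges)), 2)
        (i, j), (k, l) = edges[a], edges[b]
        if len({i, j, k, l}) < 4:
            continue                                 # would create a self-loop
        if frozenset((i, k)) in present or frozenset((j, l)) in present:
            continue                                 # would create a multi-edge
        present -= {frozenset((i, j)), frozenset((k, l))}
        present |= {frozenset((i, k)), frozenset((j, l))}
        edges[a], edges[b] = (i, k), (j, l)
        done += 1
    return edges

ring = [(n, (n + 1) % 8) for n in range(8)]   # an 8-cycle: every degree is 2
rewired = double_edge_swap(ring, nswap=4)     # same degree sequence as `ring`
```

Repeating such swaps walks through $\\{\cal G_{SF}\\}$ while the degree sequence, and hence the degree distribution, stays fixed.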
Here, the strategies of the players and the rules of the game do not affect the population structure, irrespective of whether the underlying network is SF or not [30, 45]. Of course, recently it has been thought that the emergence of SF networks can depend upon the proportions of different types of players present. Indeed, in some agent-based modeling frameworks, agents influence the fundamental nature of the network upon which they act, including the emergence of scale-free behavior, even for a fixed set of interaction rules [48]. Interaction between two cooperators results in a reward, $\cal R$. If two defectors interact with each other, they will earn punishment, $\cal P$. On the other hand, interaction between $C$ and $D$ will lead to the sucker’s payoff, $\cal S$, for $C$ and temptation, $\cal T$, for $D$. In a PD game, ${\cal T}>{\cal R}>{\cal P}>{\cal S}$ [5]. Herein, these payoff values are considered to be ${\cal R}=1.0$, $1.0<\cal{T}\leq$ $2.0$, ${\cal P}=0.0$ and ${\cal S}=0.0$ [35]. Each round, during both the transient and the counting period, comprises payoff determination and strategy-update steps. First, players accumulate payoffs from interactions with their neighbors. If an individual, $i$, interacts with a randomly chosen neighbor, $j$, its payoff is $\pi_{ij}$. Generally $\pi_{ij}\neq\pi_{ji}$. The value of $\pi_{ij}$ would be $\cal R$, $\cal T$, $\cal P$ or $\cal S$. The accumulated payoff of $i$ is $\Pi_{i}=\sum_{j}\pi_{ij}$. After payoff determination, individuals update their strategies synchronously. Let $\Pi_{i}$ and $\Pi_{j}$ denote the accumulated payoffs of $i$ and $j$ respectively. $i$ will imitate the strategy of $j$ with a probability, ${P}_{i\rightarrow j}=\frac{\Pi_{j}-\Pi_{i}}{({\cal T}-{\cal S})\times max(k_{i},k_{j})}\Theta_{\Pi_{j}\textgreater\Pi_{i}}$ (3) Here $\Theta_{\Pi_{j}\textgreater\Pi_{i}}=1$ for $\Pi_{j}\textgreater\Pi_{i}$ and zero otherwise. 
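The payoff accumulation and the imitation rule of Eqn. 3 can be sketched as follows (the adjacency-list layout and all names are ours; payoff values follow the text, with ${\cal T}=1.31$ chosen as in the figures):

```python
# Sketch of the payoff accumulation and Eqn. 3 (our own code).
R, T, P, S = 1.0, 1.31, 0.0, 0.0          # weak-PD payoffs used in the letter

def payoff(me, other):
    """pi_ij for strategies 'C' (cooperate) and 'D' (defect)."""
    if me == 'C':
        return R if other == 'C' else S
    return T if other == 'C' else P

def accumulated(n, adj, strat):
    """Pi_n = sum of pi_nj over all neighbors j of n."""
    return sum(payoff(strat[n], strat[j]) for j in adj[n])

def imitation_prob(i, j, adj, strat):
    """P(i -> j) of Eqn. 3; zero unless Pi_j > Pi_i."""
    pi_i, pi_j = accumulated(i, adj, strat), accumulated(j, adj, strat)
    if pi_j <= pi_i:
        return 0.0
    return (pi_j - pi_i) / ((T - S) * max(len(adj[i]), len(adj[j])))

# A 5-leaf star with a defecting hub: every leaf copies the hub with certainty.
adj = {0: [1, 2, 3, 4, 5], **{n: [0] for n in range(1, 6)}}
strat = {0: 'D', **{n: 'C' for n in range(1, 6)}}
print(round(imitation_prob(1, 0, adj, strat), 6))   # 1.0: a leaf copies the hub
print(imitation_prob(0, 1, adj, strat))             # 0.0: the hub never copies a leaf
```

The normalisation by $({\cal T}-{\cal S})\times max(k_{i},k_{j})$ keeps the imitation probability in $[0,1]$, since $\Pi_{j}\leq{\cal T}k_{j}$ here.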
This condition indicates that individuals will try to maximize their payoff and $i$ will imitate $j$’s strategy only if $\Pi_{j}\textgreater\Pi_{i}$. $10^{4}$ generations of transient time have been considered in each ensemble. The final fraction of cooperators, $f_{C}$, is averaged over $10^{3}$ generations. For each network, ${\cal N}=1024$ and average degree, $\langle k\rangle=4$. Fig. 1 presents a plot of the fraction of cooperators, $f_{C}$, against temptation, $\cal T$, at different values of $S$. Higher values of $\cal T$ favor defection and result in a decrease of $f_{C}$. We also observe that the dependence of $f_{C}$ on $S$ is highly non-monotonic. Further, the maintenance of cooperation is high only in and around $S=0.4$. This demonstrates that the maintenance of cooperation in scale-free networks is not decided by the degree distribution alone. Figure 1: Fraction of cooperators, $f_{C}$, versus temptation, $\cal T$, at different values of S-metric, $S$. Results are for $f_{C_{i}}=0.5$, ${\cal N}=1024$, $\langle k\rangle=4$, $E_{\cal N}=1600$ ensembles. Cooperation is high for $S=0.4$ at all values of $\cal T$. However, for higher and lower values of $S$ cooperation is not maintained well. The standard error is smaller than the size of the data points. The complex variation of $f_{C}$ with $S$ at different values of $\cal{T}$ is demonstrated in Fig. 2. We observe that at low values of ${\cal T}$ and $S$, cooperation is well-maintained. However, at higher values of ${\cal T}$, the maintenance of cooperation is higher in and around $S=[0.35,0.45]$. Figure 2: $f_{C}$ versus $S$ at various values of $\cal T$ for scale-free (SF) networks. Red indicates the maintenance of cooperation and blue its absence. Cooperation depends on both $S$ and $\cal T$. Results are for $f_{C_{i}}=0.5$, ${\cal N}=1024$, $\langle k\rangle=4$, $E_{\cal N}=1500$ ensembles. 
Cooperation is largely well-maintained at lower values of $\cal T$ and $S$, and ill-maintained at higher values. We observe that $f_{C}$ is higher in and around $S=[0.35,0.45]$. This demonstrates that the maintenance of cooperation in SF networks is not decided by the degree distribution alone. In Fig. 3(a) we examine the behaviour of $f_{C}$ with respect to $S$ at various values of $\cal T$. We again observe that $f_{C}$ is higher in and around $S=[0.35,0.45]$, as witnessed earlier in Fig. 2. It has been widely presumed that hubs are responsible for the maintenance of cooperation in heterogeneous population structures. The underlying thought seems to be that when the hubs are cooperators they can acquire higher payoffs [37]. Herein, graphs having different values of $S$ possess the same degree sequence by definition. It can then be expected that $f_{C}$ should not depend on $S$. However, from Figs. 1, 2 and 3, it can be easily observed that $f_{C}$ strongly depends on $S$. Figure 3: Fraction of cooperators, $f_{C}$, versus $S$, at various values of $\cal{T}$. While only scale-free graphs have been considered here, clearly not all of them promote cooperation. At all values of $\cal T$, $f_{C}$ is higher in and around $S=[0.35,0.45]$. Here, $f_{C_{i}}=0.5$, ${\cal N}=1024$, $\langle k\rangle=4$, and $E_{\cal N}=1000$. The standard error is smaller than the size of the data points. Figure 4: Assortativity, $r$, versus $S$ for (a) the original graph, ${\cal G}$ and (b) cooperator graph, ${\cal G}_{C}$, and defector graph, ${\cal G}_{D}$, at ${\cal T}=1.31$. (c) Fraction of cooperators, $f_{C}$, versus $r$ at various values of ${\cal T}$. $f_{C_{i}}=0.5$, ${\cal N}=1024$, $\langle k\rangle=4$, and $E_{\cal N}=1000$. The standard error is smaller than the size of the data points. The variation of $r$ with $S$ is studied in Fig. 4(a) and is observed to be consistent with the reported literature [43]. 
It has been postulated earlier in both two-person PD games and multi-individual public goods games that assortativity among cooperators could work in favour of cooperation [41, 42]. It has also been observed that if the entire network is assortative (in contrast to assortativity among the cooperators only), it helps in the maintenance of cooperation [49, 50]. Some studies have indicated that cooperation may be sustained in disassortative networks [51, 52]. When hubs act as cooperators they can accumulate higher payoffs. Hence, cooperation can be maintained in a population. Also, assortativity between the hubs should operate in favor of cooperation, as they can acquire higher payoffs as well. Therefore, cooperation should be maintained in assortative graphs, which possess higher values of $S$. However, in disassortative graphs cooperation might not be maintained. If cooperation depended monotonically on assortativity then, since $r$ varies linearly with $S$, $f_{C}$ would be expected to vary monotonically with $S$. It is evident from Fig. 3 that for higher and lower values of $S$, cooperation is not maintained well enough. A suitable region for the maintenance of cooperation lies somewhere between highly assortative and highly disassortative graphs. Hence, we can conclude that networks with a scale-free degree distribution do not always promote cooperation. Also, hubs and assortativity between them might not really be responsible for the maintenance of cooperation. Assortativity among cooperators, $r_{C}$, can perhaps be differently scrutinised through the “cooperator graph”, ${\cal G}_{C}$, instead of the original graph, ${\cal G}$ [17]. Similarly, the “defector graph”, ${\cal G}_{D}$, may be useful to understand the assortativity between defectors, $r_{D}$. We can construct ${\cal G}_{C}$ and ${\cal G}_{D}$ from the original graph, ${\cal G}$ [17]. ${\cal G}_{C}$ and ${\cal G}_{D}$ are solely graphs of cooperators and defectors respectively among themselves. 
${\cal G}_{C}$ is obtained by removing every defector and each of its connections from ${\cal G}$. Similarly, ${\cal G}_{D}$ is obtained by pruning all cooperators and their connections from ${\cal G}$. ${\cal G}_{C}$ and ${\cal G}_{D}$ respectively capture the connectivity among cooperators and defectors themselves in ${\cal G}$, but not between any cooperator and defector. For completeness, in Fig. 4(b) we study the variation of $r$ versus $S$ for ${\cal G}_{C}$ and ${\cal G}_{D}$. In contrast to the linear behaviour observed in Fig. 4(a) for the full graph, ${\cal G}$, we observe a non-linear variation in ${\cal G}_{C}$ and ${\cal G}_{D}$. We have observed earlier in Fig. 3 that the maintenance of cooperation is higher in and around $S=[0.35,0.45]$ for ${\cal G}$. However, Fig. 4(b) for ${\cal G}_{C}$ and ${\cal G}_{D}$ demonstrates that the value of $r_{C}$ is enhanced at higher values of $S$. ${\cal G}_{C}$ captures purely the connections between cooperators only, while $f_{C}$ is calculated for the full graph, ${\cal G}$. Therefore, $f_{C}$ may not really be correlated with $r_{C}$. Also, defection dominates at $S=0.99$, while $r_{D}$ is higher at $S=0.9$. In Fig. 4(c), we also observe the variation of $f_{C}$ versus $r$ at different values of $\cal T$. In summary, the role of assortativity in the maintenance of cooperation in a population needs far larger scrutiny in order to arrive at a suitable conclusion. Figure 5: (a) Fraction of hubs acting as cooperators, $f_{{\cal H}_{C}}$, or defectors, $f_{{\cal H}_{D}}$, and, (b) ${\cal H}_{r}=f_{{\cal H}_{C}}/f_{{\cal H}_{D}}$; versus $S$ at ${\cal T}=1.31$. All graphs possess the same degree sequence and therefore the same number of hubs and an identical degree distribution. ${\cal H}_{r}$ peaks at $S=0.4$, akin to $f_{C}$ in Fig. 3. We also observe that hubs are mostly defectors at higher $S$. Results are for $\langle k\rangle=4$, $E_{\cal N}=1000$ ensembles. 
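This pruning amounts to a strategy-restricted subgraph and can be sketched as follows (all names are ours):

```python
def strategy_subgraph(adj, strat, keep):
    """Keep only nodes whose strategy equals `keep` ('C' or 'D') and the
    edges among them, mirroring the construction of G_C and G_D."""
    nodes = {n for n in adj if strat[n] == keep}
    return {n: [m for m in adj[n] if m in nodes] for n in nodes}

# A cooperator triangle 0-1-2 with a pendant defector 3 attached to node 0:
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0]}
strat = {0: 'C', 1: 'C', 2: 'C', 3: 'D'}
g_c = strategy_subgraph(adj, strat, 'C')   # the triangle survives intact
g_d = strategy_subgraph(adj, strat, 'D')   # one isolated defector, no edges
```

Note that edges between a cooperator and a defector appear in neither subgraph, which is exactly why $r_{C}$ and $r_{D}$ need not track the assortativity of the full graph.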
The standard error is smaller than the size of the data points. Figure 6: Average clustering coefficient, $\langle{\cal C}\rangle$, versus $S$ at ${\cal T}=1.31$. Results are for (a) ${\cal N}=512$ and (b) ${\cal N}=1024$ nodes. $\langle{\cal C}\rangle$ may not depend prominently on $S$ as ${\cal N}\to\infty$. Neither has any such dependence been widely reported in the literature. Figs. 1, 2 and 3 exhibit a strong dependence of $f_{C}$ on $S$. A natural question is whether and how $f_{C}$ would depend on $\langle{\cal C}\rangle$, especially if the dependence of $\langle{\cal C}\rangle$ on $S$ is minimal. Results are for $\langle k\rangle=4$ and $E_{\cal N}=1200$. The standard error is smaller than the size of the data points. In Fig. 6(a), we study the average clustering coefficient, $\langle{\cal C}\rangle$, $\forall\cal G\in\\{\cal G_{SF}\\}$ at different values of $S$. $\langle{\cal C}\rangle$ may not prominently depend on $S$ as ${\cal N}\to\infty$. Neither has any such dependence of $\langle{\cal C}\rangle$ on $S$ been widely reported in the literature. We have observed earlier that Figs. 1, 2 and 3 exhibit a strong dependence of $f_{C}$ on $S$. A natural question is whether and how $f_{C}$ would depend on $\langle{\cal C}\rangle$, especially if the dependence of $\langle{\cal C}\rangle$ on $S$ is minimal. Previous studies have observed that cooperation increases with an increase in network clustering [38, 39, 40]. Cooperation is known to decrease when ${\cal T}>2.5$, irrespective of the value of average clustering in the network [38]. Of course, it is also known that at higher mutation rates, even highly clustered networks may not witness cooperation [53]. However, it must also be duly noted that, while the degree distribution remained unchanged in Refs. [38, 39], the degree sequence likely changed. Herein, we have strictly retained the degree sequence throughout. 
We now address the importance of hubs in a graph by studying the variation of the number of hubs and their clustering coefficient with $S$. It has been claimed that hubs mainly act as cooperators in a scale-free network and play an important role in maintaining cooperation [36]. $\forall\cal G\in\\{\cal G_{SF}\\}$ possess an identical degree sequence. Therefore, $\forall\cal G\in\\{\cal G_{SF}\\}$ can be expected to possess an identical number of hubs. Let $k_{sd}$ denote the standard deviation of the degree distribution of ${\cal G}$. Herein, we consider nodes with degree greater than $\langle k\rangle+k_{sd}$ as hubs. Let us denote all hubs by ${\cal H}$ and those which act as cooperators and defectors by ${\cal H_{C}}$ and ${\cal H_{D}}$ respectively. The numbers of these hubs can then be denoted by ${\cal N_{H}}$, ${\cal N}_{{\cal H}_{C}}$ and ${\cal N}_{{\cal H}_{D}}$ respectively. The respective fractions of such hubs are denoted as $f_{\cal H}$, $f_{{\cal H}_{C}}$ and $f_{{\cal H}_{D}}$. The value of $f_{\cal H}$ does not depend on the value of $S$ but is decided by $\cal K$, as aforementioned. In Fig. 5(a), we study the variation of $f_{{\cal H}_{C}}$ and $f_{{\cal H}_{D}}$ with $S$. We observe that as $S$ increases, $f_{{\cal H}_{C}}$ gradually starts declining but $f_{{\cal H}_{D}}$ rises. $f_{{\cal H}_{C}}$ is higher at lower values of $S$ and is responsible for the overall maintenance of cooperation in ${\cal G}$. However, as $S$ increases, hubs start adopting defection. Therefore, irrespective of the presence of hubs, cooperation is not maintained at higher values of $S$. We also study ${{\cal H}_{r}}={\cal N}_{{\cal H}_{C}}/{\cal N}_{{\cal H}_{D}}=f_{{\cal H}_{C}}/f_{{\cal H}_{D}}$ versus $S$ in Fig. 5(b). ${\cal H}_{r}$ is highest at $S=0.4$, where cooperation is also highest, as already observed in Fig. 3. Hubs seem to play an important role in maintaining cooperation, when they are cooperators. 
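The hub criterion can be sketched directly (our own code; reading $k_{sd}$ as the population standard deviation of the degrees is our assumption):

```python
from statistics import mean, pstdev

def find_hubs(adj):
    # Hubs per the text: nodes with degree greater than <k> + k_sd.
    # (pstdev, the population standard deviation, is our assumption.)
    deg = {n: len(adj[n]) for n in adj}
    cut = mean(deg.values()) + pstdev(deg.values())
    return {n for n, k in deg.items() if k > cut}

# A 5-leaf star: only the centre exceeds <k> + k_sd.
star = {0: [1, 2, 3, 4, 5], **{n: [0] for n in range(1, 6)}}
print(find_hubs(star))   # {0}
```

Since the threshold depends only on the degree sequence $\cal K$, the set of hubs, and hence $f_{\cal H}$, is indeed identical across all graphs in $\\{\cal G_{SF}\\}$.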
However, whether they act as cooperators or defectors would depend on the topology of the graph. Figure 7: (a) Fraction of cooperators, $f_{{\cal C}_{i},C}$, possessing ${\cal C}_{i}=0$ and ${\cal C}_{i}=(0,1]$, (b) fraction of all nodes, $f_{{\cal C}_{i}}$, with ${\cal C}_{i}=0$ and ${\cal C}_{i}=(0,1]$, (c) average clustering coefficient of cooperator hubs, ${\langle{\cal C}\rangle}_{{\cal H}_{C}}$; versus $S$ at ${\cal T}=1.31$. $f_{{{\cal C}_{0}},C}$ peaks at $S=0.4$ akin to Fig. 3. $f_{{{\cal C}_{0}},C}$ rather than $f_{{{\cal C}_{(0,1]}},C}$ decides $f_{C}$ as seen in (b). ${\langle{\cal C}\rangle}_{{\cal H}_{C}}$ increases monotonically with $S$ in (c). Results are for $f_{C_{i}}=0.5$, ${\cal N}=1024$, $\langle k\rangle=4$, $E_{\cal N}=1000$. The standard error is smaller than the size of the data points. We also study the clustering coefficient, ${\cal C}_{i}$, of node, $i$, at different values of $S$. We denote the total number of nodes in the network possessing ${\cal C}_{i}=0$ and $0<{\cal C}_{i}\leq 1$ by ${\cal N}_{{\cal C}_{0}}$ and ${\cal N}_{{\cal C}_{(0,1]}}$ respectively. The fraction of nodes in the network possessing ${\cal C}_{i}=0$ and $0<{\cal C}_{i}\leq 1$ is denoted by $f_{{\cal C}_{0}}={\cal N}_{{\cal C}_{0}}/{\cal N}$ and $f_{{\cal C}_{(0,1]}}={\cal N}_{{\cal C}_{(0,1]}}/{\cal N}$ respectively. These numbers and fractions obviously include both cooperators and defectors. We now specifically denote the number of cooperators in the network possessing ${\cal C}_{i}=0$ and $0<{\cal C}_{i}\leq 1$ by ${\cal N}_{{\cal C}_{0},C}$ and ${\cal N}_{{\cal C}_{(0,1]},C}$ respectively. The fractions of such nodes can then be respectively denoted by $f_{{\cal C}_{0},C}={\cal N}_{{\cal C}_{0},C}/{\cal N}$ and $f_{{\cal C}_{(0,1]},C}={\cal N}_{{\cal C}_{(0,1]},C}/{\cal N}$. Fig. 7(a) exhibits the variation of $f_{{\cal C}_{0},C}$ and $f_{{\cal C}_{(0,1]},C}$ versus $S$ at ${\cal T}=1.31$. 
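The per-node clustering coefficient ${\cal C}_{i}$ used here is the standard local one, sketched below (our own code, on the same adjacency-list layout as above):

```python
def local_clustering(adj, n):
    # C_n: fraction of pairs of neighbours of n that are themselves linked.
    nbrs = adj[n]
    k = len(nbrs)
    if k < 2:
        return 0.0   # convention: nodes with degree < 2 have C_n = 0
    links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
    return 2.0 * links / (k * (k - 1))

tri = {0: [1, 2], 1: [0, 2], 2: [0, 1]}   # triangle: C_i = 1 everywhere
chain = {0: [1], 1: [0, 2], 2: [1]}       # path: the middle node has C_1 = 0
print(local_clustering(tri, 0), local_clustering(chain, 1))   # 1.0 0.0
```

Averaging $local\_clustering$ over all nodes gives $\langle{\cal C}\rangle$, while restricting the average to cooperator hubs gives ${\langle{\cal C}\rangle}_{{\cal H}_{C}}$.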
We observe that the position of the peak for $f_{{\cal C}_{0},C}$ mirrors that of $f_{C}$ as observed in Fig. 3 earlier. Fig. 7(b) records the variation of $f_{{\cal C}_{0}}$ and $f_{{\cal C}_{(0,1]}}$ versus $S$. Clearly, $f_{{\cal C}_{0}}$ is far more influential than $f_{{\cal C}_{(0,1]}}$ in deciding $f_{C}$. We have represented hubs acting as cooperators by ${{\cal H}_{C}}$. ${\langle{\cal C}\rangle}_{{\cal H}_{C}}$ denotes their average clustering coefficient. Fig. 7(c) demonstrates that ${\langle{\cal C}\rangle}_{{\cal H}_{C}}$ increases monotonically with $S$. The variation of ${\langle{\cal C}\rangle}_{{\cal H}_{C}}$ with respect to $S$ in Fig. 7(c) is in remarkable contrast to the variation of $f_{{\cal H}_{C}}$ versus $S$ as observed in Fig. 5. As aforementioned, it has been reported earlier that the average clustering coefficient of a network works in favour of cooperation. However, we observe that the average clustering coefficient of hubs may not really promote cooperation. As $S$ increases, ${\langle{\cal C}\rangle}_{{\cal H}_{C}}$ increases monotonically, while the maintenance of cooperation progressively decreases. Indeed, at $S=0.99$, ${\langle{\cal C}\rangle}_{{\cal H}_{C}}$ is at its highest, yet the maintenance of cooperation is minimal. In order to gain better insight into the maintenance of cooperation, we take recourse to toy networks. In all toy networks considered herein, ${\cal R}=1$, ${\cal T}=1.01$, ${\cal P}=0$, ${\cal S}=0$ [35]. Let $i$ and $j$ be two randomly chosen neighbors in the population. Let $A$ and $B$ denote the strategies of $i$ and $j$ respectively. Each strategy can be either cooperation or defection. Let $k_{i}$ denote the degree of $i$, and $k_{j}$ that of $j$. If $k_{i_{C}}$ and $k_{i_{D}}$ denote the numbers of $C$ and $D$ in the neighborhood of $i$, then $k_{i_{C}}+k_{i_{D}}=k_{i}$. Similarly, $k_{j_{C}}+k_{j_{D}}=k_{j}$. 
The accumulated payoff of $i$ is ${\Pi}_{i}=\sum_{j}\pi_{ij}={k_{i_{C}}}({\pi}_{C-A})+{k_{i_{D}}}({\pi}_{D-A})$ (4) Obviously $A$ can be either $C$ or $D$. ${\pi}_{C-C}={\cal R}$ (reward), ${\pi}_{C-D}={\cal T}$ (temptation), ${\pi}_{D-C}={\cal S}$ (sucker’s payoff) and ${\pi}_{D-D}={\cal P}$ (punishment). The accumulated payoff of an arbitrarily chosen neighbor, $j$, of node, $i$, is ${\Pi}_{j}={k_{j_{C}}}({\pi}_{C-B})+{k_{j_{D}}}({\pi}_{D-B})$ (5) Individual, $i$, would update to the strategy of $j$ with a probability $P(i\to j)$ as shown in Eqn. 3. Similarly, $j$ can also imitate the strategy of $i$, with probability, ${P}_{j\rightarrow i}=\frac{\Pi_{i}-\Pi_{j}}{({\cal T}-{\cal S})\times max(k_{i},k_{j})}\Theta_{\Pi_{i}\textgreater\Pi_{j}}$. The star graph in Fig. 8(a) has one hub and five leaves. Suppose the hub, $i$, is a defector and the leaves are cooperators. Let $j$ be any arbitrarily chosen neighbor of $i$. Then, $A=D$, $B=C$, $k_{i_{C}}=5$, $k_{i_{D}}=0$, $k_{j_{C}}=0$, $k_{j_{D}}=1$. The accumulated payoffs of $i$ and $j$ are $\Pi_{i}=5.05$ and $\Pi_{j}=0$. Since ${\Pi_{i}>\Pi_{j}}$, $i$ will not imitate the strategy of its neighbor $j$. However, $j$ will imitate the strategy of $i$ with the probability $P(j\to i)=1$. Hubs play a significant role in the maintenance of cooperation. If the hub is a cooperator, it will acquire a higher payoff and gain an evolutionary advantage over its neighbors. However, maintenance of cooperation becomes fragile if the hub is a defector. Figure 8: Blue and red denote cooperators and defectors respectively. (a) Star graph with $({\cal N},{\cal E})=(6,5)$. It can withstand the invasion of defection if the hub is not a defector. Graphs in (b) and (c) have $({\cal N},{\cal E})=(10,13)$ with $S_{b}\textgreater S_{c}$ and $r_{b}\textgreater r_{c}$. A direct link between two hubs, $i$ and $k$, makes this network vulnerable to defection. (b) Defection is likely to dominate if $i$ is a defector. $i$ can turn $k$ into a defector. 
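The star-graph arithmetic above can be checked directly:

```python
# Numerical check of the star-graph example of Fig. 8(a): a defector hub i
# with five cooperator leaves, and R = 1, T = 1.01, P = S = 0.
R, T, P, S = 1.0, 1.01, 0.0, 0.0
k_i, k_j = 5, 1                        # degrees of the hub and of a leaf
Pi_i = 5 * T                           # Eqn. 4: temptation from each of 5 leaves
Pi_j = 1 * S                           # Eqn. 5: sucker's payoff from the hub
P_j_to_i = (Pi_i - Pi_j) / ((T - S) * max(k_i, k_j))
print(round(Pi_i, 6), round(P_j_to_i, 6))   # 5.05 1.0
```

Every leaf therefore copies the defecting hub with certainty, which is what makes the star graph fragile whenever its hub defects.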
(c) Cooperation is possible as $k$ will never adopt defection. From Fig. 8(a) we observe that the strategy of the hub dominates the outcome. Cooperation would be maintained in a star graph when the hub is itself a cooperator. However, the presence of many cooperator hubs in a network is not enough in itself for maintaining cooperation. There are two hubs in each of the toy networks in Fig. 8(b) and 8(c). Fig. 8(b) indicates that if hub $i$ is a defector, $k$ might adopt defection with a probability $P(k\to i)$. From Eqn. 3, we have $P(k\to i)=0.207$. Therefore, despite the presence of the cooperator hub, defection is likely to dominate the population. Evidently, a crucial role is played by the edge between $i$ and $k$: through this connection, a defector hub can easily affect its neighboring cooperator hub. On the other hand, in Fig. 8(c), we observe that if a defector hub and a cooperator hub are directly connected to each other, the defector hub would not be able to affect the cooperator hub. In summary, a significant body of literature states that scale-free networks can facilitate cooperation. Herein, we examine the prisoner’s dilemma game on scale-free networks. We demonstrate that identical power-law degree distributions, and indeed even an identical power-law degree sequence, may exhibit remarkably different outcomes with regard to cooperation. Our results indicate that the maintenance of cooperation could be higher in SF networks within a narrow range of $S$. We examine the correlation between assortativity among cooperators and the maintenance of cooperation. For this we borrow the notions of “cooperator graph”, $\cal G_{C}$, and “defector graph”, $\cal G_{D}$ [17]. We measure the assortativity between cooperators, $r_{C}$, with the help of ${\cal G}_{C}$. We observe that the maintenance of cooperation does not always arise as a direct consequence of the assortativity between them. 
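The star-graph argument of Fig. 8(a) can be checked numerically from Eqns. 4, 5 and the imitation rule above. The following is a minimal sketch, not the paper's code: the function and variable names are illustrative, and the payoff convention `pi[(opponent, own)]` follows the $\pi_{C-A}$ notation of Eqn. 4.

```python
# Payoff entries of the prisoner's dilemma used in the toy networks:
# R (reward), T (temptation), S (sucker's payoff), P (punishment).
R, T, S, P = 1.0, 1.01, 0.0, 0.0

# pi[(opponent, own)]: payoff for playing `own` against `opponent`.
pi = {('C', 'C'): R, ('C', 'D'): T, ('D', 'C'): S, ('D', 'D'): P}

def payoff(own, n_coop, n_def):
    """Accumulated payoff of a node with n_coop C-neighbors and n_def D-neighbors (Eqns. 4/5)."""
    return n_coop * pi[('C', own)] + n_def * pi[('D', own)]

def imitation_prob(pi_i, pi_j, k_i, k_j):
    """Probability that j adopts the strategy of i; zero unless pi_i > pi_j."""
    if pi_i <= pi_j:
        return 0.0
    return (pi_i - pi_j) / ((T - S) * max(k_i, k_j))

# Star graph of Fig. 8(a): defector hub i (degree 5) with 5 cooperator
# leaves; an arbitrary leaf j has degree 1.
Pi_i = payoff('D', n_coop=5, n_def=0)   # 5 * T = 5.05
Pi_j = payoff('C', n_coop=0, n_def=1)   # 1 * S = 0
p = imitation_prob(Pi_i, Pi_j, k_i=5, k_j=1)   # evaluates to 1
```

The sketch reproduces the numbers quoted in the text: the defector hub earns $\Pi_i = 5.05$, the leaf earns $\Pi_j = 0$, and the leaf imitates the hub with probability 1.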
From the nature of variation of $f_{C}$ versus $r$, we also observe that cooperation does not bear a linear relationship with $r$. We also study the average clustering coefficient of the network at different values of $S$. It has been reported that clustering directly influences the maintenance of cooperation in a network. However, we observe that for scale-free graphs with identical degree sequence, cooperation may not really depend on clustering. In addition, we evaluate the role of hubs in the maintenance of cooperation. In a heterogeneous population, cooperator hubs play a crucial role in accumulating higher payoffs. $\forall{\cal G}\in{\cal G}_{SF}$, cooperation does not depend merely on the number of hubs, but rather on those hubs which are cooperators. However, whether the hubs become cooperators or defectors depends on the topology of the network. It appears that hubs are more likely to be directly connected to each other in graphs associated with higher values of $S$. If a hub becomes a defector, then other hubs are also likely to start adopting defection. Therefore, at higher values of $S$, cooperation becomes rather fragile due to the presence of direct edges between hubs. It would be beneficial to focus on the clustering coefficient of cooperator hubs, or even of individual nodes, rather than the average clustering coefficient of the network. We have observed that an increase in the clustering coefficient of the hubs is antagonistic to the maintenance of cooperation. Therefore, the presence of hubs is also not enough in itself to enhance the stability of cooperation. In summary, we conclude that our existing understanding of cooperation on heterogeneous networks needs considerable revision. We scrutinise SF networks possessing an identical degree sequence and therefore an identical degree distribution. 
This leads us to observe that a power-law degree distribution may not be sufficient in itself for the maintenance of cooperation. Further, the average clustering coefficient and assortativity may not have as large an influence over the maintenance of cooperation as previously thought.

## References

* [1] Von Neumann J. and Morgenstern O., Theory of Games and Economic Behavior (Princeton University Press, Princeton) 1944.
* [2] Dong J., Comparison between classical game theory and evolutionary game theory focused on prisoner's dilemma, in Proc. 2nd Intl. Conf. on Economic Management and Cultural Industry (Atlantis Press) 2020, pp. 125–128.
* [3] Taylor P. D. and Jonker L. B., Mathematical Biosciences, 40 (1978) 145.
* [4] Nowak M. A., Science, 314 (2006) 1560.
* [5] Szabó G. and Fáth G., Physics Reports, 446 (2007) 97.
* [6] Sinha S., Ghosh S. and Roy S., International Journal of Advances in Engineering Sciences and Applied Mathematics, 11 (2019) 138.
* [7] Cheney D. L., Proceedings of the National Academy of Sciences U.S.A., 108 (2011) 10902.
* [8] Sachs J. L. and Hollowell A. C., mBio, 3 (2012) e00099-12.
* [9] McKenna M. F., Calambokidis J., Oleson E. M., Laist D. W. and Goldbogen J. A., Endangered Species Research, 27 (2015) 219.
* [10] Perc M. and Wang Z., PLOS One, 5 (2010) e15117.
* [11] Nowak M. A. and May R. M., Nature, 359 (1992) 826.
* [12] Szolnoki A. and Perc M., Europhysics Letters, 92 (2010) 38003.
* [13] Antonioni A., Cacault M. P., Lalive R. and Tomassini M., PLOS One, 8 (2013) e55033.
* [14] Lee S., Holme P. and Wu Z.-X., Physical Review Letters, 106 (2011) 028702.
* [15] Sinha S., Nath D. and Roy S., Journal of the Indian Institute of Science, 103 (2021), in press.
* [16] Maciejewski W., Fu F. and Hauert C., PLoS Computational Biology, 10 (2014) e1003567.
* [17] Sinha S., Nath D. and Roy S., The European Physical Journal B, 94 (2021) 80.
* [18] Gómez-Gardeñes J., Campillo M., Floría L. M. and Moreno Y., Physical Review Letters, 98 (2007) 108103.
* [19] Szolnoki A., Perc M. and Danku Z., Physica A, 387 (2008) 2075.
* [20] Wu Z.-X., Rong Z. and Holme P., Physical Review E, 80 (2009) 036106.
* [21] Albert R. and Barabási A.-L., Reviews of Modern Physics, 74 (2002) 47.
* [22] Newman M. E., SIAM Review, 45 (2003) 167.
* [23] Sinha S., Samaddar S., Das Gupta S. K. and Roy S., Bioinformatics, 37 (2021) 213.
* [24] Banerjee S. J., Azharuddin M., Sen D., Savale S., Datta H., Dasgupta A. K. and Roy S., Scientific Reports, 5 (2015) 17271.
* [25] Banerjee S. J., Sinha S. and Roy S., Physical Review E, 91 (2015) 022807.
* [26] Kaur Grewal R., Mitra D. and Roy S., Bioinformatics, 31 (2015) 3608.
* [27] Deb A., Grewal R. K., Roy S. and Mitra D., Proteins: Structure, Function, and Bioinformatics, 88 (2020) 1660.
* [28] Roy S., Systems and Synthetic Biology, 6 (2012) 31.
* [29] Barabási A.-L. and Albert R., Science, 286 (1999) 509.
* [30] Broido A. D. and Clauset A., Nature Communications, 10 (2019) 1.
* [31] Clauset A., Shalizi C. R. and Newman M. E., SIAM Review, 51 (2009) 661.
* [32] Doyle J. C., Alderson D. L., Li L., Low S., Roughan M., Shalunov S., Tanaka R. and Willinger W., Proceedings of the National Academy of Sciences U.S.A., 102 (2005) 14497.
* [33] Tsiotas D., Proceedings of the National Academy of Sciences U.S.A., 116 (2019) 6701.
* [34] D’souza R. M., Borgs C., Chayes J. T., Berger N. and Kleinberg R. D., Proceedings of the National Academy of Sciences U.S.A., 104 (2007) 6112.
* [35] Santos F. C. and Pacheco J. M., Physical Review Letters, 95 (2005) 098104.
* [36] Santos F. C., Rodrigues J. and Pacheco J., Proceedings of the Royal Society B, 273 (2006) 51.
* [37] Santos F. C., Santos M. D. and Pacheco J. M., Nature, 454 (2008) 213.
* [38] Assenza S., Gómez-Gardeñes J. and Latora V., Physical Review E, 78 (2008) 017101.
* [39] Kuperman M. and Risau-Gusman S., Physical Review E, 86 (2012) 016104.
* [40] Rong Z., Yang H.-X. and Wang W.-X., Physical Review E, 82 (2010) 047101.
* [41] Wang J., Suri S. and Watts D. J., Proceedings of the National Academy of Sciences U.S.A., 109 (2012) 14363.
* [42] Smith K. M., Larroucau T., Mabulla I. A. and Apicella C. L., Current Biology, 28 (2018) 3152.
* [43] Li L., Alderson D., Doyle J. C. and Willinger W., Internet Mathematics, 2 (2005) 431.
* [44] Newman M. E., Physical Review Letters, 89 (2002) 208701.
* [45] Holme P., Nature Communications, 10 (2019) 1.
* [46] Maslov S. and Sneppen K., Science, 296 (2002) 910.
* [47] Xulvi-Brunet R. and Sokolov I. M., Physical Review E, 70 (2004) 066102.
* [48] Fleming S. W., Physica A, 567 (2021) 125678.
* [49] Tanimoto J., Physica A, 389 (2010) 3325.
* [50] Tanimoto J., Physica A, 392 (2013) 2955.
* [51] Rong Z., Li X. and Wang X., Physical Review E, 76 (2007) 027101.
* [52] Tanimoto J., Physica A: Statistical Mechanics and its Applications, 388 (2009) 953.
* [53] Rui C., Yuan-Ying Q., Xiao-Jie C. and Long W., Chinese Physics Letters, 27 (2010) 030203.
# Transformers in Vision: A Survey Salman Khan, Muzammal Naseer, Munawar Hayat, Syed Waqas Zamir, Fahad Shahbaz Khan, and Mubarak Shah S. Khan, M. Naseer and F. S. Khan are with the MBZ University of Artificial Intelligence, Abu Dhabi, UAE. E-mail: [email protected] M. Hayat is with the Faculty of IT, Monash University, Clayton VIC 3800, Australia. S. W. Zamir is with the Inception Institute of Artificial Intelligence, Abu Dhabi, UAE. S. Khan and M. Naseer are also with the CECS, Australian National University, Canberra ACT 0200, Australia. F. S. Khan is also with the Computer Vision Laboratory, Linköping University, Sweden. M. Shah is with the Center for Research in Computer Vision, University of Central Florida, Orlando, FL 32816, United States. Manuscript received March, 2021. ###### Abstract Astounding results from Transformer models on natural language tasks have intrigued the vision community to study their application to computer vision problems. Among their salient benefits, Transformers enable modeling long dependencies between input sequence elements and support parallel processing of sequences, as compared to recurrent networks, _e.g._, long short-term memory (LSTM). Different from convolutional networks, Transformers require minimal inductive biases for their design and are naturally suited as set-functions. Furthermore, the straightforward design of Transformers allows processing multiple modalities (_e.g._, images, videos, text and speech) using similar processing blocks, and demonstrates excellent scalability to very large capacity networks and huge datasets. These strengths have led to exciting progress on a number of vision tasks using Transformer networks. This survey aims to provide a comprehensive overview of the Transformer models in the computer vision discipline. We start with an introduction to the fundamental concepts behind the success of Transformers, i.e., self-attention, large-scale pre-training, and bidirectional feature encoding. 
We then cover extensive applications of transformers in vision including popular recognition tasks (_e.g._, image classification, object detection, action recognition, and segmentation), generative modeling, multi-modal tasks (_e.g._, visual question answering, visual reasoning, and visual grounding), video processing (_e.g._, activity recognition, video forecasting), low-level vision (_e.g._, image super-resolution, image enhancement, and colorization) and 3D analysis (_e.g._, point cloud classification and segmentation). We compare the respective advantages and limitations of popular techniques both in terms of architectural design and their experimental value. Finally, we provide an analysis on open research directions and possible future works. We hope this effort will ignite further interest in the community to solve current challenges towards the application of transformer models in computer vision. ###### Index Terms: Self-attention, transformers, bidirectional encoders, deep neural networks, convolutional networks, self-supervision. ## 1 Introduction Transformer models [1] have recently demonstrated exemplary performance on a broad range of language tasks, _e.g._, text classification, machine translation [2] and question answering. Among these models, the most popular ones include BERT (Bidirectional Encoder Representations from Transformers) [3], GPT (Generative Pre-trained Transformer) v1-3 [4, 5, 6], RoBERTa (Robustly Optimized BERT Pre-training) [7] and T5 (Text-to-Text Transfer Transformer) [8]. The profound impact of Transformer models has become more clear with their scalability to very large capacity models [9, 10]. For example, the BERT-large [3] model with 340 million parameters was significantly outperformed by the GPT-3 [6] model with 175 billion parameters, while the latest mixture-of-experts Switch transformer [10] scales up to a whopping 1.6 trillion parameters! 
Figure 1: Statistics on the number of times keywords such as BERT, Self-Attention, and Transformers appear in the titles of peer-reviewed and arXiv papers over the past few years (in Computer Vision and Machine Learning). The plots show consistent growth in recent literature. This survey covers recent progress on Transformers in the computer vision domain. The breakthroughs from Transformer networks in the Natural Language Processing (NLP) domain have sparked great interest in the computer vision community to adapt these models for vision and multi-modal learning tasks (Fig. 1). However, visual data follows a typical structure (e.g., spatial and temporal coherence), thus demanding novel network designs and training schemes. As a result, Transformer models and their variants have been successfully used for image recognition [11, 12], object detection [13, 14], segmentation [15], image super-resolution [16], video understanding [17, 18], image generation [19], text-image synthesis [20] and visual question answering [21, 22], among several other use cases [23, 24, 25, 26]. This survey aims to cover such recent and exciting efforts in the computer vision domain, providing a comprehensive reference to interested readers. Transformer architectures are based on a self-attention mechanism that learns the relationships between elements of a sequence. As opposed to recurrent networks that process sequence elements recursively and can only attend to short-term context, Transformers can attend to complete sequences, thereby learning long-range relationships. Although attention models have been extensively used in both feed-forward and recurrent networks [27, 28], Transformers are based solely on the attention mechanism and have a unique implementation (i.e., multi-head attention) optimized for parallelization. 
An important feature of these models is their scalability to high-complexity models and large-scale datasets, e.g., in comparison to some of the other alternatives such as hard attention [29], which is stochastic in nature and requires Monte Carlo sampling for sampling attention locations. Since Transformers assume minimal prior knowledge about the structure of the problem as compared to their convolutional and recurrent counterparts [30, 31, 32], they are typically pre-trained using pretext tasks on large-scale (unlabelled) datasets [1, 3]. Such pre-training avoids costly manual annotations, thereby encoding highly expressive and generalizable representations that model rich relationships between the entities present in a given dataset. The learned representations are then fine-tuned on the downstream tasks in a supervised manner to obtain favorable results. This paper provides a holistic overview of the transformer models developed for computer vision applications. We develop a taxonomy of the network design space and highlight the major strengths and shortcomings of the existing methods. Other literature reviews mainly focus on the NLP domain [33, 34] or cover generic attention-based approaches [33, 27]. By focusing on the newly emerging area of visual transformers, we comprehensively organize the recent approaches according to the intrinsic features of self-attention and the investigated task. We first provide an introduction to the salient concepts underlying Transformer networks and then elaborate on the specifics of recent vision transformers. Wherever possible, we draw parallels between the Transformers used in the NLP domain [1] and the ones developed for vision problems to highlight major novelties and interesting domain-specific insights. 
Recent approaches show that convolution operations can be fully replaced with attention-based transformer modules, and the two have also been used jointly in a single design to encourage symbiosis between these complementary sets of operations. This survey finally details open research questions with an outlook towards possible future work. ## 2 Foundations There exist two key ideas that have contributed towards the development of conventional transformer models. (a) The first one is _self-attention_, which allows capturing ‘long-term’ dependencies between sequence elements, as compared to conventional recurrent models that find it challenging to encode such relationships. (b) The second key idea is that of _pre-training_ on a large (un)labelled corpus in a (self)supervised manner, and subsequently fine-tuning to the target task with a small labeled dataset [3, 7, 38]. (Several recent Vision Transformers demonstrate that the model can be learned end-to-end on ImageNet-1K without any dedicated pre-training phase [35, 36, 37]; however, the performance generally remains lower than the pre-trained counterparts.) Below, we provide a brief tutorial on these two ideas (Sec. 2.1 and 2.2), along with a summary of seminal Transformer networks (Sec. 2.3 and 2.4) where these ideas have been applied. This background will help us better understand the forthcoming Transformer based models used in the computer vision domain (Sec. 3). Figure 2: An example self-attention block used in the vision domain [39]. Given the input sequence of image features, the triplet of (key, query, value) is calculated, followed by attention calculation and applying it to reweight the values. A single head is shown here, and an output projection (W) is finally applied to obtain output features with the same dimension as the input. Figure adapted from [39]. Figure 3: _Architecture of the Transformer Model_ [1]. 
The model was first developed for the language translation task, where an input sequence in one language is required to be converted to an output sequence in another language. The Transformer encoder (_middle_ row) operates on the input language sequence and converts it to an embedding before passing it on to the encoder blocks. The Transformer decoder (_bottom_ row) operates on the previously generated outputs in the translated language and on the encoded input sequence from the middle branch to output the next word in the output sequence. The sequence of previous outputs (used as input to the decoder) is obtained by shifting the output sentence to the right by one position and appending a start-of-sentence token at the beginning. This shifting prevents the model from simply learning to copy the decoder input to the output. The ground-truth to train the model is simply the output language sequence (without any right shift) appended with an end-of-sentence token. The blocks consisting of multi-head attention (_top_ row) and feed-forward layers are repeated $N$ times in both the encoder and decoder. ### 2.1 Self-Attention in Transformers Given a sequence of items, self-attention estimates the relevance of one item to other items (e.g., which words are likely to come together in a sentence). The self-attention mechanism is an integral component of Transformers, which explicitly models the interactions between all entities of a sequence for structured prediction tasks. Basically, a self-attention layer updates each component of a sequence by aggregating global information from the complete input sequence. Let us denote a sequence of $n$ entities ($\mathbf{x}_{1},\mathbf{x}_{2},\cdots\mathbf{x}_{n}$) by $\mathbf{X}\in\mathbb{R}^{n\times d}$, where $d$ is the embedding dimension to represent each entity. The goal of self-attention is to capture the interaction amongst all $n$ entities by encoding each entity in terms of the global contextual information. 
This is done by defining three learnable weight matrices to transform Queries ($\mathbf{W}^{Q}\in\mathbb{R}^{d\times d_{q}}$), Keys ($\mathbf{W}^{K}\in\mathbb{R}^{d\times d_{k}}$) and Values ($\mathbf{W}^{V}\in\mathbb{R}^{d\times d_{v}}$), where $d_{q}=d_{k}$. The input sequence $\mathbf{X}$ is first projected onto these weight matrices to get $\mathbf{Q}=\mathbf{X}\mathbf{W}^{Q}$, $\mathbf{K}=\mathbf{X}\mathbf{W}^{K}$ and $\mathbf{V}=\mathbf{X}\mathbf{W}^{V}$. The output $\mathbf{Z}\in\mathbb{R}^{n\times d_{v}}$ of the self-attention layer is $\mathbf{Z}=\mathbf{softmax}\left(\frac{\mathbf{Q}\mathbf{K}^{T}}{\sqrt{d_{q}}}\right)\mathbf{V}.$ For a given entity in the sequence, self-attention basically computes the dot-product of the query with all keys, which is then normalized using the softmax operator to get the attention scores. Each entity then becomes the weighted sum of all entities in the sequence, where the weights are given by the attention scores (Fig. 2 and Fig. 3, top row-left block). Masked Self-Attention: The standard self-attention layer attends to all entities. For the Transformer model [1], which is trained to predict the next entity of the sequence, the self-attention blocks used in the decoder are masked to prevent attending to the subsequent future entities. This is simply done by an element-wise multiplication operation with a mask $\mathbf{M}\in\mathbb{R}^{n\times n}$, where $\mathbf{M}$ is an upper-triangular matrix. The masked self-attention is defined by $\mathbf{softmax}\left(\frac{\mathbf{Q}\mathbf{K}^{T}}{\sqrt{d_{q}}}\circ\mathbf{M}\right),$ where $\circ$ denotes the Hadamard product. Basically, while predicting an entity in the sequence, the attention scores of the future entities are set to zero in masked self-attention. 
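The computation above can be sketched in a few lines of NumPy. This is an illustrative toy, not a reference implementation; the dimensions are arbitrary, and the causal mask at the end uses the additive $-\infty$ form commonly used in practice (an assumption on my part), which realizes the same effect as the masked formulation above: future positions receive zero attention weight after the softmax.

```python
import numpy as np

# Shapes follow the text: X is (n, d); projections give Q (n, d_q),
# K (n, d_k), V (n, d_v) with d_q = d_k.
rng = np.random.default_rng(0)
n, d, d_qk, d_v = 4, 8, 8, 8

X = rng.standard_normal((n, d))
W_Q = rng.standard_normal((d, d_qk))
W_K = rng.standard_normal((d, d_qk))
W_V = rng.standard_normal((d, d_v))

def softmax(a, axis=-1):
    a = a - a.max(axis=axis, keepdims=True)   # subtract max for stability
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

Q, K, V = X @ W_Q, X @ W_K, X @ W_V
scores = Q @ K.T / np.sqrt(d_qk)      # (n, n) scaled dot-product scores
A = softmax(scores, axis=-1)          # attention weights; rows sum to 1
Z = A @ V                             # (n, d_v) output of the layer

# Causal (decoder-style) masking: set scores for future positions to -inf
# before the softmax so that their attention weights vanish.
mask = np.triu(np.ones((n, n)), k=1).astype(bool)   # True above the diagonal
A_causal = softmax(np.where(mask, -np.inf, scores), axis=-1)
Z_causal = A_causal @ V
```

After masking, `A_causal` is lower-triangular: each entity attends only to itself and to earlier entities, matching the stated behaviour of masked self-attention.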
Multi-Head Attention: In order to encapsulate multiple complex relationships amongst different elements in the sequence, multi-head attention comprises multiple self-attention blocks ($h=8$ in the original Transformer model [1]). Each block has its own set of learnable weight matrices $\\{\mathbf{W}^{Q_{i}},\mathbf{W}^{K_{i}},\mathbf{W}^{V_{i}}\\}$, where $i=0\cdots(h{-}1)$. For an input $\mathbf{X}$, the outputs of the $h$ self-attention blocks in multi-head attention are concatenated into a single matrix $[\mathbf{Z}_{0},\mathbf{Z}_{1},\cdots\mathbf{Z}_{h-1}]\in\mathbb{R}^{n\times h\cdot d_{v}}$ and projected onto a weight matrix $\mathbf{W}\in\mathbb{R}^{h\cdot d_{v}\times d}$ (Fig. 3, top row). The main difference between self-attention and the convolution operation is that the filters are dynamically calculated, instead of being static (the same for any input) as in the case of convolution. Further, self-attention is invariant to permutations and to changes in the number of input points. As a result, it can easily operate on irregular inputs, as opposed to standard convolution, which requires a grid structure. Furthermore, it has been shown in the literature that self-attention (with positional encodings) is theoretically a more flexible operation which can model the behaviour of convolutional models towards encoding local features [40]. Cordonnier et al. [41] further studied the relationships between self-attention and convolution operations. Their empirical results confirm that multi-head self-attention (with sufficient parameters) is a more generic operation which can model the expressiveness of convolution as a special case. In fact, self-attention provides the capability to learn global as well as local features, and the expressivity to adaptively learn both kernel weights and the receptive field (similar to deformable convolutions [42]). 
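The multi-head construction described above (independent heads, concatenation, output projection $\mathbf{W}$) can be sketched as follows; a hedged toy in NumPy with dimensions and names chosen for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, h = 4, 16, 8
d_v = d // h                            # per-head key/value dimension

def softmax(a):
    a = a - a.max(axis=-1, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=-1, keepdims=True)

def head(X, W_Q, W_K, W_V):
    """One self-attention head: softmax(Q K^T / sqrt(d_q)) V."""
    Q, K, V = X @ W_Q, X @ W_K, X @ W_V
    return softmax(Q @ K.T / np.sqrt(W_Q.shape[1])) @ V

X = rng.standard_normal((n, d))
heads = []
for i in range(h):                      # each head has its own weight set
    W_Q, W_K, W_V = (rng.standard_normal((d, d_v)) for _ in range(3))
    heads.append(head(X, W_Q, W_K, W_V))

Z_cat = np.concatenate(heads, axis=-1)  # (n, h * d_v) concatenated heads
W_O = rng.standard_normal((h * d_v, d)) # output projection W
out = Z_cat @ W_O                       # (n, d): same shape as the input
```

The output projection brings the concatenated head outputs back to dimension $d$, so multi-head blocks can be stacked.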
### 2.2 (Self) Supervised Pre-training Self-attention based Transformer models generally operate in a two-stage training mechanism. First, pre-training is performed on a large-scale dataset (and sometimes a combination of several available datasets [22, 43]) in either a supervised [11] or a self-supervised manner [3, 44, 45]. Later, the pre-trained weights are adapted to the downstream tasks using small- to mid-scale datasets. Examples of downstream tasks include image classification [46], object detection [13], zero-shot classification [20], question-answering [10] and action recognition [18]. The effectiveness of pre-training for large-scale Transformers has been advocated in both the language and vision domains. For example, the Vision Transformer model (ViT-L) [11] experiences an absolute $13\%$ drop in accuracy on the ImageNet test set when trained only on the ImageNet train set, as compared to the case when pretrained on the JFT dataset [47] with 300 million images. Since acquiring manual labels at a massive scale is cumbersome, self-supervised learning has been very effectively used in the pre-training stage. The self-supervision based pre-training stage has played a crucial role in unleashing the scalability and generalization of Transformer networks, enabling the training of networks with even above a _trillion_ parameters (e.g., the latest Switch Transformer [10] from Google). An extensive survey on SSL can be found in [48, 49]. As nicely summarized by Y. LeCun [50], the basic idea of SSL is to _fill in the blanks_, i.e., try to predict the occluded data in images, future or past frames in temporal video sequences, or predict a pretext task, _e.g._, the amount of rotation applied to inputs, the permutation applied to image patches or the color of a gray-scale image. Another effective way to impose self-supervised constraints is via contrastive learning. 
In this case, nuisance transformations are used to create two types of modified versions of the same image, i.e., without changing the underlying class semantics (_e.g._, image stylizing, cropping) and with semantic changes (_e.g._, replacing an object with another in the same scene, or changing the class with minor adversarial changes to the image). Subsequently, the model is trained to be invariant to the nuisance transformations and to emphasize modeling minor changes that can alter semantic labels. Self-supervised learning provides a promising learning paradigm since it enables learning from a vast amount of readily available non-annotated data. In the SSL based pre-training stage, a model is trained to learn a meaningful representation of the underlying data by solving a pretext task. The pseudo-labels for the pretext task are automatically generated (without requiring any expensive manual annotations) based on data attributes and the task definition. Therefore, the pretext task definition is a critical choice in SSL. We can broadly categorize existing SSL methods based upon their pretext tasks into (a) generative approaches which synthesize images or videos (given conditional inputs), (b) context-based methods which exploit the relationships between image patches or video frames, and (c) cross-modal methods which leverage multiple data modalities. Examples of generative approaches include conditional generation tasks such as masked image modeling [43] and image colorization [51], image super-resolution [52], image in-painting [53], and GAN-based methods [54, 55]. The context-based pretext methods solve problems such as a jigsaw puzzle on image patches [56, 57, 58], masked object classification [22], predicting geometric transformations such as rotation [46, 59], or verifying the temporal sequence of video frames [60, 61, 62]. 
Cross-modal pretext methods verify the correspondence of two input modalities, _e.g._, text & image [63], audio & video [64, 65] or RGB & flow [66]. ### 2.3 Transformer Model The architecture of the Transformer model proposed in [1] is shown in Fig. 3. It has an encoder-decoder structure. The encoder (_middle_ row) consists of six identical blocks (i.e., $N{=}6$ in Fig. 3), with each block having two sub-layers: a multi-head self-attention network, and a simple position-wise fully connected feed-forward network. Residual connections [67] alongside layer normalization [68] are employed after each block, as in Fig. 3. Note that, different from regular convolutional networks where feature aggregation and feature transformation are simultaneously performed (_e.g._, with a convolution layer followed by a non-linearity), these two steps are decoupled in the Transformer model, i.e., the self-attention layer only performs aggregation while the feed-forward layer performs transformation. Similar to the encoder, the decoder (_bottom_ row) in the Transformer model comprises six identical blocks. Each decoder block has three sub-layers: the first two (multi-head self-attention and feed-forward) are similar to the encoder, while the third sub-layer performs multi-head attention on the outputs of the corresponding encoder block, as shown in Fig. 3. The original Transformer model in [1] was trained for the machine translation task. The input to the encoder is a sequence of words (a sentence) in one language. Positional encodings are added to the input sequence to capture the relative position of each word in the sequence. Positional encodings have the same dimensions as the input, $d=512$, and can be learned or pre-defined, _e.g._, by sine and cosine functions. Being an auto-regressive model, the decoder of the Transformer [1] uses previous predictions to output the next word in the sequence. 
The decoder, therefore, takes inputs from the encoder as well as the previous outputs to predict the next word of the sentence in the translated language. To facilitate residual connections, the output dimensions of all layers are kept the same, i.e., $d=512$. The dimensions of the query, key and value weight matrices in multi-head attention are set to $d_{q}=64,d_{k}=64,d_{v}=64$. Figure 4: _A taxonomy of self-attention design space_. Existing approaches based on self-attention explore single-head or multi-head (transformer) designs for vision tasks. We note that interesting efforts have been made to utilize knowledge from convolution based architectures to improve ViTs (e.g., multi-scale and hybrid designs). We categorize the upcoming sections of this survey according to the types of self-attention block (_left tree diagram_) as well as the prominent tasks in computer vision (_right_). ### 2.4 Bidirectional Representations The training strategy of the original Transformer model [1] could only attend to the context on the left of a given word in the sentence. This is limiting, since for most language tasks, contextual information from both the left and right sides is important. Bidirectional Encoder Representations from Transformers (BERT) [3] proposed to jointly encode the right and left context of a word in a sentence, thus improving the learned feature representations for textual data in a self-supervised manner. To this end, BERT [3] introduced two pretext tasks to pre-train the Transformer model [1] in a self-supervised manner: Masked Language Model and Next Sentence Prediction. For adapting the pre-trained model to downstream tasks, a task-specific additional output module is appended to the pre-trained model, and the full model is fine-tuned end-to-end. Here, we briefly touch upon these pretext tasks. 
(1) Masked Language Model (MLM) - A fixed percentage (15%) of words in a sentence are randomly masked and the model is trained to predict these masked words using a cross-entropy loss. In predicting the masked words, the model learns to incorporate the bidirectional context. (2) Next Sentence Prediction (NSP) - Given a pair of sentences, the model predicts a binary label, i.e., whether the pair is valid from the original document or not. The training data for this can easily be generated from any monolingual text corpus. A pair of sentences A and B is formed such that B is the actual sentence (next to A) 50% of the time, and B is a random sentence for the other 50% of the time. NSP enables the model to capture sentence-to-sentence relationships, which are crucial in many language modeling tasks such as Question Answering and Natural Language Inference. ## 3 Self-Attention & Transformers in Vision We broadly categorize vision models with self-attention into two categories: models which use single-head self-attention (Sec. 3.1), and models which employ multi-head self-attention based Transformer modules in their architectures (Sec. 3.2). Below, we first discuss the first category of single-head self-attention based frameworks, which generally apply global or local self-attention within CNN architectures, or utilize matrix factorization to enhance design efficiency, and use vectorized attention models. We then discuss the Transformer-based vision architectures in Sec. 3.2. ### 3.1 Single-head Self-Attention #### 3.1.1 Self-Attention in CNNs Inspired by the non-local means operation [69], which was mainly designed for image denoising, Wang et al. [70] proposed a differentiable non-local operation for deep neural networks to capture long-range dependencies both in space and time in a feed-forward fashion. Given a feature map, their proposed operator [70] computes the response at a position as a weighted sum of the features at all positions in the feature map. 
This way, the non-local operation is able to capture interactions between any two positions in the feature map regardless of the distance between them. Video classification is an example of a task where long-range interactions between pixels exist both in space and time. Equipped with the capability to model long-range interactions, [70] demonstrated the superiority of non-local deep neural networks for more accurate video classification on the Kinetics dataset [71]. Although self-attention allows us to model full-image contextual information, it is both memory and compute intensive. As shown in Fig. 5(a), in order to encode global context for a given pixel location, the non-local block [70] computes a _dense_ attention map (in green). The non-local block [70] has a high complexity of $\mathcal{O}(N^{2})$, where $N$ denotes the number of positions in the input feature map. To reduce this computational burden, Huang et al. [72] propose the criss-cross attention module, which for each pixel position generates a _sparse_ attention map only on the criss-cross path, as illustrated in Fig. 5(b). Further, by applying criss-cross attention recurrently, each pixel position can capture context from all other pixels. Compared to the non-local block, criss-cross attention uses 11$\times$ less GPU memory and reduces the per-pixel attention span from $\mathcal{O}(N)$ to $\mathcal{O}(2\sqrt{N})$ positions. State-of-the-art results are reported [72] for the semantic and instance segmentation tasks on several benchmark datasets including Cityscapes [73], ADE20K [74], COCO [75], LIP [76] and CamVid [77].

(a) Non-local block [70] (b) Criss-cross attention [72]

Figure 5: Comparison of two different self-attention approaches: the non-local self-attention block [70] and the criss-cross self-attention module [72]. Figure is from [72].

Another shortcoming of the convolutional operator comes from the fact that after training, it applies fixed weights regardless of any changes to the visual input.
Hu et al. [78] proposed local relation networks to adaptively compose pixels in a local window. They introduced a new differentiable layer that adapts its weight aggregation based on the compositional relations (similarity) between pixels/features within a local window. Such adaptive weight aggregation introduces geometric priors into the network, which are important for recognition tasks [78]. Convolution is considered a top-down operator, as it remains fixed across positions, while a non-local operation such as the one introduced in [69] is a bottom-up method, as it aggregates input features over the full image. The local relation layer belongs to the category of bottom-up methods but is restricted to a fixed window size, _e.g._, a 7$\times$7 neighborhood.

Bello et al. [79] explore the possibility of employing self-attention as an alternative to convolutional operators. They employ relative position encoding [80] in two dimensions to develop a new self-attention mechanism that maintains translation equivariance, a desirable property for handling images. Although this self-attention provides competitive results as a stand-alone computational primitive, the best performance is obtained in combination with convolutional operations. The authors show that attention augmentation leads to systematic performance gains in image classification and object detection across different architectures.

#### 3.1.2 Self-Attention as a Stand-alone Primitive

As discussed above, convolutional layers possess translation equivariance but cannot scale to a large receptive field, and therefore cannot capture long-range interactions [81]. On the other hand, global attention [1], which attends to all spatial locations of the input, can be computationally intensive and is preferred on down-sampled small images, image patches [11], or for augmenting the convolutional feature space [79].
Ramachandran et al. [81] proposed to replace convolutional layers in deep neural networks with a local self-attention layer which can be applied to small or large inputs without increasing the computational cost. At a basic level, the proposed self-attention layer [81] considers all pixel positions in a window of a specific size around a given pixel, computes query, key, and value vectors for these pixels, and then aggregates the spatial information within this window. The value vectors are aggregated after weighting them with the softmax scores of the queries and keys. This process is repeated for all given pixels and the responses are concatenated to produce the output. ResNet models with local self-attention layers can perform ImageNet classification and COCO object detection with fewer parameters than ResNet models based on convolutional layers [81].

Zhao et al. [82] note that a traditional convolution operator performs feature aggregation and transformation jointly (by applying a filter and then passing it through a non-linearity). In contrast, they propose to perform feature aggregation separately with self-attention, followed by transformation using an element-wise perceptron layer. For feature aggregation, they propose two alternative strategies: (a) pairwise self-attention and (b) patch-wise self-attention. Pairwise self-attention is a permutation- and cardinality-invariant operation, while patch-wise self-attention does not have such invariance properties (similar to convolution). Both pairwise and patch-wise self-attention are implemented as _vector_ attention [82] that learns weights for both the spatial and channel dimensions. This provides an alternative to attention that is conventionally performed using scalar weights (by taking a dot-product). Pairwise self-attention is a set operator that computes a _vector attention_ keeping in view the relationships of a particular feature with its neighbors in a given local neighborhood.
In contrast, patch-wise self-attention is a generalization of the convolution operator (not a set operator) and looks at all the feature vectors in the local neighbourhood when deriving the attention vectors. The authors show that, with considerably fewer parameters, self-attention networks (SAN) can beat ResNet baselines on the ImageNet dataset. They further show robustness against adversarial perturbations [83, 84] and generalization to unseen transformations [85]. This behaviour is due to the dynamic nature of attention, which makes it difficult for the adversary to calculate useful fooling directions.

### 3.2 Multi-head Self-Attention (Transformers)

Unlike the approaches discussed in Sec. 3.1, which insert self-attention as a component in CNN-inspired architectures, Vision Transformers (ViTs) [11] adapt the architecture of [1] (see Fig. 3), which cascades multiple Transformer layers. ViTs have gained significant research attention, and a number of recent approaches build upon them. Below, we discuss these methods by categorizing them into: uniform-scale ViTs having single-scale features through all layers (Sec. 3.2.1), multi-scale ViTs that learn hierarchical features which are more suitable for dense prediction tasks (Sec. 3.2.2), and hybrid designs having convolution operations within ViTs (Sec. 3.2.3).

#### 3.2.1 Uniform-scale Vision Transformers

The original Vision Transformer [11] model belongs to this family, in which multi-head self-attention is applied at a consistent scale of the input image, i.e., the spatial scale is maintained throughout the network hierarchy. We name such models uniform-scale ViTs, as described below.

Vision Transformer (ViT) [11] (Fig. 6) is the first work to showcase how Transformers can ‘altogether’ replace standard convolutions in deep neural networks on large-scale image datasets. The authors applied the original Transformer model [1] (with minimal changes) to a sequence of image ‘patches’ flattened as vectors.
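The “patchify” step just described amounts to pure array reshaping plus a learned linear embedding. A shape-level sketch (the dimensions follow the common ViT-Base configuration of $16\times 16$ patches and $d=768$; the zero matrix stands in for the learned embedding):

```python
import numpy as np

H = W = 224; P = 16; C = 3; d = 768            # ViT-Base style settings
img = np.zeros((H, W, C))

# split into non-overlapping P x P patches and flatten each patch into a vector
patches = (img.reshape(H // P, P, W // P, P, C)
              .swapaxes(1, 2)
              .reshape(-1, P * P * C))          # (196, 768): a 14 x 14 grid of patches
E = np.zeros((P * P * C, d))                    # stand-in for the learned embedding matrix
tokens = patches @ E                            # (196, 768) token sequence for the Transformer
print(patches.shape, tokens.shape)
```

The resulting 196-token sequence (plus a class token and position embeddings in the real model) is what the stacked Transformer layers operate on.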
The model was pre-trained on a large proprietary dataset (the JFT dataset [47] with 300 million images) and then fine-tuned on downstream recognition benchmarks, _e.g._, ImageNet classification. This pre-training step is important, since pre-training ViT on a medium-scale dataset does not give competitive results: CNNs encode prior knowledge about images (inductive biases, _e.g._, translation equivariance) that reduces the need for data, whereas Transformers must discover such information from very large-scale data. Notably, compared to the iGPT [19] model, which also applied Transformers to full-sized images but performs training as a generative task, ViT pre-trains the model with a supervised classification task (although a self-supervised variant is also explored, which results in lower performance).

Figure 6: An overview of Vision Transformer (on the _left_) and the details of the Transformer encoder (on the _right_). The architecture resembles Transformers used in the NLP domain, and the image patches are simply fed to the model after flattening. After training, the feature obtained from the first token position is used for classification. Image obtained from [11].

DeiT [12] is the first work to demonstrate that Transformers can be learned on mid-sized datasets (i.e., 1.2 million ImageNet examples compared to the 300 million images of JFT [47] used in ViT [11]) in relatively shorter training episodes. Besides using augmentation and regularization procedures common in CNNs, the main contribution of DeiT [12] is a novel native distillation approach for Transformers which uses a CNN as a teacher model (RegNetY-16GF [86]) to train the Transformer model. The outputs from the CNN aid the Transformer in efficiently figuring out useful representations for input images. A distillation token is appended to the input patch embeddings and the class token.
The self-attention layers operate on these tokens to learn their inter-dependencies and output the learned class, patch, and distillation tokens. The network is trained with a cross-entropy loss defined on the output class token and a distillation loss to match the distillation token with the teacher output. Both _soft_ and _hard_ label choices were explored for distillation, and hard distillation was found to perform better. Interestingly, the learned class and distillation tokens do not exhibit a high correlation, indicating their complementary nature. The learned representations compare favorably against top-performing CNN architectures such as EfficientNet [87] and also generalize well to a number of downstream recognition tasks.

Token-to-Token (T2T) ViT [35] recursively combines neighboring tokens into a single token to reduce the token length and aggregate spatial context. Transformer in Transformer [88] computes attention at two levels: patch-level (as done in standard ViTs [11]) and local sub-patch-level (_e.g._, by subdividing a $16\times 16$ patch into four $4\times 4$ blocks, and computing attention amongst these blocks). In token-labelling ViT [89], all patch tokens contribute towards the loss calculation, unlike regular ViTs which only use the classification token in the loss. This process includes auxiliary supervision where each image patch (token) is labeled using a pre-trained CNN model. Similar to CutMix augmentation [90], tokens from different images are mixed as an augmentation strategy, and the model is trained using the standard classification loss and an auxiliary token-label loss. The model demonstrates excellent performance, especially for smaller-sized models. The quadratic complexity of self-attention hinders its applicability to longer sequences (high-resolution images).
Cross-Covariance Image Transformers (XCiT) [91] incorporate attention across feature channels instead of tokens, i.e., their cross-covariance attention is given by $\mathbf{V}\,\mathbf{softmax}\left(\frac{\mathbf{K}^{T}\mathbf{Q}}{\tau}\right)$, where $\tau$ is a learnable temperature. The proposed cross-covariance attention has linear complexity in the number of tokens (since the attention map size depends on the feature dimension instead of the number of tokens). XCiT can therefore handle large-resolution images and demonstrates excellent performance across different vision tasks, i.e., self-supervised and fully supervised image classification and dense prediction (detection, segmentation).

DeepViT [92] observes that the similarity between attention maps of deeper layers is high, which hinders scaling the model depth. The authors propose to re-attend the attention maps in a multi-head block instead of simply aggregating them, and show consistent gains over standard multi-head self-attention based ViTs.

#### 3.2.2 Multi-scale Vision Transformers

In standard ViTs, the number of tokens and the token feature dimension are kept fixed throughout the network. This is limiting, since the model is unable to capture fine spatial details at different scales. Initial Transformer-based dense prediction methods (e.g., DETR [13]) therefore have a convolutional backbone. A multi-stage hierarchical design for ViTs, where the number of tokens is gradually reduced while the token feature dimension is progressively increased, has been shown to produce effective features for dense prediction tasks [93, 94, 36, 95, 96]. These models generally also perform well for recognition tasks. Such architectures mostly sparsify tokens by merging neighboring tokens and projecting them to a higher-dimensional feature space.
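One common realization of this token merging (used, e.g., by Swin Transformer) concatenates each $2\times 2$ neighborhood of tokens and linearly projects it down; the $2\times 2$ grouping and output width below are illustrative choices, and the zero matrix stands in for the learned projection:

```python
import numpy as np

def merge_tokens(x):
    """Merge each 2x2 neighborhood of tokens: (H, W, C) -> (H/2, W/2, 2C).
    The token count drops 4x while the feature width doubles."""
    H, W, C = x.shape
    g = (x.reshape(H // 2, 2, W // 2, 2, C)
          .swapaxes(1, 2)
          .reshape(H // 2, W // 2, 4 * C))   # concat the 4 neighbors channel-wise
    Wm = np.zeros((4 * C, 2 * C))            # stand-in for the learned reduction
    return g @ Wm

y = merge_tokens(np.zeros((56, 56, 96)))
print(y.shape)  # (28, 28, 192)
```

Stacking a few such stages yields the token/feature pyramid that makes these models suitable for dense prediction.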
Examples of multi-stage ViTs include Pyramid ViT [93, 97], Twins [37], CoaT [98], Swin Transformer [36], Convolutional vision Transformer (CvT) [96], Shuffle Transformer [95], CrossFormer [99], RegionViT [100] and Focal Transformer [94]. Some of these are hybrid designs (with both convolution and self-attention operations, see Sec. 3.2.3), while others employ a pure self-attention based design (discussed next).

Pyramid ViT (PVT) [93] is the first hierarchical design for ViTs, and proposes a progressive shrinking pyramid and spatial-reduction attention. PVTv2 [97] and SegFormer [101] improve upon the original PVT [93] by introducing overlapping patch embedding, depth-wise convolution, and efficient attention. Swin Transformer [36] has a multi-stage hierarchical architecture which computes attention within a local window, by partitioning the window into multiple sub-patches. To capture interactions between different windows (image locations), the window partitioning is gradually shifted along the hierarchy of the network to capture overlapping regions. Focal Transformer [94] is another hierarchical design, in which focal self-attention is introduced to simultaneously capture global and local relationships. Similarly, CrossFormer [99] has a hierarchical pyramid structure, and introduces a cross-scale embedding module, along with long-short distance attention and a dynamic position bias, to faithfully capture both local and global visual cues. RegionViT [100] proposes regional-to-local attention to encode hierarchical features. Multi-Scale Vision Longformer [102] also considers a local context in self-attention, but employs the efficient Longformer [103] design for self-attention. CrossViT [104] encodes multi-scale features with two branches (each with multiple transformer blocks), by separately processing smaller and larger image patches. The information from these two multi-scale branches is then fused together using a cross-attention module.
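The window partitioning and shifting used by Swin Transformer, as described above, can be illustrated on a toy token grid (the $8\times 8$ grid and window size 4 are illustrative values):

```python
import numpy as np

H = W = 8; win = 4
grid = np.arange(H * W).reshape(H, W)     # token ids on an 8x8 grid

def partition(g):
    """Split an (H, W) grid into non-overlapping win x win windows."""
    return g.reshape(H // win, win, W // win, win).swapaxes(1, 2).reshape(-1, win, win)

windows = partition(grid)                 # 4 windows; attention is computed inside each
# shifted stage: cyclically roll the grid by win//2 before partitioning, so the
# new windows straddle the old boundaries and information flows between them
shifted = partition(np.roll(grid, (-win // 2, -win // 2), axis=(0, 1)))
print(windows.shape)  # (4, 4, 4)
```

Each shifted window now mixes tokens from several of the original windows, which is what lets purely window-local attention still propagate information across the whole image over successive layers.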
#### 3.2.3 Hybrid ViTs with Convolutions

Convolutions do an excellent job at capturing low-level local features in images, and have been explored in multiple hybrid ViT designs, especially at the beginning to “patchify and tokenize” an input image. For example, Convolutional vision Transformer (CvT) [96] incorporates a convolution-based projection to capture the spatial structure and low-level details for the tokenization of image patches. CvT has a hierarchical design, in which the number of tokens is progressively reduced while the token width is increased, thus imitating the effect of spatial downsampling in CNNs. Convolution-enhanced image Transformers [105] employ a convolution-based image-to-token module to extract low-level features. Compact Convolutional Transformer (CCT) [106] introduces a new sequence pooling scheme and incorporates convolutional blocks (conv-pool-reshape) for tokenization. CCT can be trained from scratch on smaller datasets, _e.g._, CIFAR10 with $\sim 95\%$ accuracy, a remarkable property not possible with traditional ViTs. LocalViT [107] introduces depthwise convolutions to enhance the local feature modeling capability of ViTs. LeViT [108] (name inspired by LeNet [109]) applies a four-layer CNN block (with $3\times 3$ convolutions) at the beginning, with progressively increasing channels (3, 32, 64, 128, 256). For a $3\times 224\times 224$ input image, the resulting $256\times 14\times 14$ output from the CNN block becomes the input to a hierarchical ViT. By virtue of this design, LeViT is $5\times$ faster than EfficientNet [87] on CPU at inference. ResT [110] is another hierarchical architecture which applies a CNN block at the beginning for patch embedding. It incorporates depth-wise convolutions and adaptive position encoding to tackle varying image sizes. A recent approach, NesT [111], proposes a simple technique to introduce hierarchy in ViTs. NesT divides an image into non-overlapping blocks (each block is further split into patches).
It first separately applies local self-attention on patches within each block, and then enables global interaction between blocks by aggregating them into an image space and applying a convolution operation, followed by downsampling. The number of blocks is gradually reduced along the hierarchy of the model, while the number of local patches is kept fixed. This simple scheme performs favorably compared with more sophisticated designs [97, 36], and enables training NesT on smaller datasets (e.g., CIFAR-10) from scratch.

Depthwise Convolution and self-Attention Networks (CoAtNets) [112] introduce a relative attention module (which combines depthwise convolutions and self-attention), and vertically stack convolution and attention layers. CoAtNets demonstrate an impressive $86\%$ ImageNet top-1 accuracy without extra data (i.e., trained only on ImageNet-1k). Shuffle Transformer [95] performs self-attention within a window and has depth-wise convolutions between the window-based multi-head self-attention and the MLP. It introduces a shuffle operation to build stronger cross-patch connections. Co-scale conv-attentional image Transformers (CoaT) [98] is a hybrid hierarchical pyramid design with serial and parallel blocks, where the serial block is similar to a standard transformer block except that the attention layer is replaced with depthwise convolution. The parallel block is applied on the output of the serial blocks and encodes relationships between tokens at multiple scales using cross-attention. Twins [37] builds upon PVT [93] (an attention-only pyramid design) by replacing the absolute position embedding in PVT with a relative conditional position embedding [113], and incorporating separable depth-wise convolutions instead of the standard spatial attention, to capture the local and global context of the image. In this sense, the hybrid designs tend to combine the strengths of both convolution and transformer models.
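As a size check on one such convolutional stem, the LeViT front-end described above can be verified with the standard convolution output-size formula (the stride-2, padding-1 setting of the $3\times 3$ convolutions is an assumption made for this sketch):

```python
def conv_out(size, k=3, s=2, p=1):
    """Spatial size after a k x k convolution with stride s and padding p."""
    return (size + 2 * p - k) // s + 1

size = 224
channels = [3, 32, 64, 128, 256]          # input plus the four conv outputs
for _ in range(len(channels) - 1):        # four stride-2 convolutions
    size = conv_out(size)                 # 224 -> 112 -> 56 -> 28 -> 14
print(f"{channels[-1]} x {size} x {size}")
```

The stem thus hands the ViT stages a $256\times 14\times 14$ map, i.e., a sequence of only 196 tokens, which is what keeps the subsequent attention layers cheap.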
TransCNN [114] proposes a hierarchical multi-head self-attention block, which first learns interactions within small grids (tokens) using self-attention, and then gradually merges the smaller grids into larger grids. The proposed block can then be plugged into existing CNN architectures.

#### 3.2.4 Self-Supervised Vision Transformers

Contrastive learning based self-supervised approaches, which have gained significant success for CNN based vision tasks, have also been investigated for ViTs. Chen et al. [115] evaluate different self-supervised frameworks and propose practical strategies, including MoCo v3 (extended from v1/v2 [116, 117]), for the stabilized training of self-supervised ViTs. Xie et al. [118] combine MoCo v2 [117] and BYOL [119] to train DeiT [12] and Swin Transformer [36]. They demonstrate the generalization of the self-supervised Swin Transformer to the dense prediction tasks of detection and segmentation. Self-distillation with no labels (DINO) [120] demonstrates that self-supervised ViTs can automatically segment the background pixels of an image, even though they were never trained with pixel-level supervision, a phenomenon otherwise not observed in CNNs or fully supervised ViTs. Efficient self-supervised vision transformer (EsViT) [121] proposes a multi-stage design, where neighboring tokens are gradually merged along the hierarchy of the network, and uses DINO for self-supervision. Apart from the standard image-level self-supervision as in DINO, it incorporates additional patch-level self-supervision in which correspondence is promoted between similar patches within augmented versions of an image. EsViT demonstrates excellent performance under self-supervision settings, and its off-the-shelf features transfer better than those of the supervised Swin Transformer on 17 out of 18 evaluated datasets.
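A minimal sketch of the label-free self-distillation objective underlying DINO (heavily simplified: the temperatures and momentum rate here are illustrative values, and the real method additionally uses multi-crop augmentation and a centering step on the teacher outputs):

```python
import numpy as np

def softmax(x, t=1.0):
    e = np.exp((x - x.max()) / t)         # temperature-scaled softmax
    return e / e.sum()

def dino_loss(student_logits, teacher_logits):
    """Cross-entropy between a sharpened teacher distribution and the student
    distribution. No labels are involved: the teacher's output is the target."""
    t = softmax(teacher_logits, t=0.04)   # sharper (lower temperature) teacher
    s = softmax(student_logits, t=0.1)
    return -(t * np.log(s + 1e-12)).sum()

def ema_update(teacher_w, student_w, m=0.996):
    """The teacher is a momentum (EMA) copy of the student; no gradients flow to it."""
    return m * teacher_w + (1 - m) * student_w

loss = dino_loss(np.array([1.0, 2.0, 3.0]), np.array([1.0, 2.5, 3.0]))
```

Only the student is updated by gradient descent; the teacher drifts slowly behind it via `ema_update`, which is what stabilizes training without any labels.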
### 3.3 Transformers for Object Detection

Transformer-based modules have been used for object detection in the following ways: (a) Transformer backbones for feature extraction, with an R-CNN based head for detection (see Sec. 3.2.2); (b) a CNN backbone for visual features and a Transformer-based decoder for object detection [13, 14, 122, 123] (see Sec. 3.3.1); and (c) a purely transformer-based design for end-to-end object detection [124] (see Sec. 3.3.2).

#### 3.3.1 Detection Transformers with a CNN Backbone

Detection Transformer (DETR) [13] treats object detection as a set prediction task, i.e., given a set of image features, the objective is to predict the set of object bounding boxes. The Transformer model enables the prediction of a set of objects (in a single shot) and also allows modeling their relationships. DETR adopts a set loss function which performs bipartite matching between predictions and ground-truth boxes. The main advantage of DETR is that it removes the dependence on hand-crafted modules and operations, such as the RPN (region proposal network) and NMS (non-maximal suppression) commonly used in object detection [125, 126, 127, 128, 129]. In this manner, the dependence on prior knowledge and careful engineering design is relaxed for complex structured tasks like object detection.

Figure 7: Detection Transformer (DETR) [13] treats the object detection task as a set prediction problem and uses the Transformer network to encode relationships between set elements. A bipartite set loss is used to uniquely match the box predictions with the ground-truth boxes (shown in the _right_ two columns). In case of no match, a ‘_no object_’ class prediction is selected. Its simple design with minimal problem-specific modifications can beat a carefully built and popular Faster R-CNN model. Figure from [13].

Given spatial feature maps from the CNN backbone, the encoder first flattens the spatial dimensions (see Fig. 7).
This gives a sequence of features $d\times n$, where $d$ is the feature dimension and $n=h\times w$, with $h,w$ being the height and width of the spatial feature maps. These features are then encoded and decoded using multi-head self-attention modules as in [1]. The main difference in the decoding stage is that all boxes are predicted in parallel, while [1] uses an RNN to predict sequence elements one by one. Since the encoder and decoder are permutation invariant, learned positional encodings are used as the object queries by the decoder to generate different boxes. Note that the spatial structure in a CNN detector (e.g., Faster R-CNN) automatically encodes the positional information. DETR obtains performance comparable to the popular Faster R-CNN model [125], which is an impressive feat given its simple design. DETR has also been extended to interesting applications in other domains, e.g., Cell-DETR [130] extends it for instance segmentation of biological cells. A dedicated attention branch is added to obtain instance-wise segmentations in addition to box predictions, which are enhanced with a CNN decoder to generate accurate instance masks.

The DETR [13] model successfully combines convolutional networks with Transformers [1] to remove hand-crafted design requirements and achieves an end-to-end trainable object detection pipeline. However, it struggles to detect small objects and suffers from slow convergence and a relatively high computational cost [14]. DETR maps images to feature space before using the Transformer for relation modeling. Thus, the computational cost of self-attention grows quadratically with the spatial size of the feature map, i.e., $\mathcal{O}(H^{2}W^{2}C)$, where $H$ and $W$ represent the height and width of the feature map. This inherently limits the use of multi-scale hierarchical features [131] in the DETR training framework, which is ultimately important for detecting small objects.
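The flattening step and the quadratic cost just discussed can be made concrete with a toy shape check ($C=256$ and a $32\times 32$ map are illustrative values, not DETR's exact configuration):

```python
import numpy as np

C, H, W = 256, 32, 32                 # backbone output: a (C, H, W) feature map
fmap = np.zeros((C, H, W))

tokens = fmap.reshape(C, H * W)       # the d x n sequence, with d = C and n = H*W
d, n = tokens.shape
print(d, n)                           # 256 1024

# one self-attention map over these tokens has n*n = H^2 * W^2 entries, which is
# why the cost scales as O(H^2 W^2 C) with the spatial size of the feature map
print(n * n)                          # 1048576 attention entries per head
```

Doubling $H$ and $W$ multiplies the attention map by 16, which is exactly what makes high-resolution, multi-scale feature maps impractical in the vanilla DETR encoder.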
Furthermore, at the beginning of training, the attention module simply assigns uniform attention to all locations of the feature map and requires a large number of training epochs for the attention weights to converge to meaningfully sparse locations. This contributes to the slow convergence rate of DETR. To mitigate the above-mentioned issues, [14] proposed a deformable attention module to process the feature maps. Inspired by deformable convolutions [42], the deformable attention module [14] only attends to a sparse set of elements from the whole feature map, regardless of its spatial size. This further allows cross-scale aggregation of feature maps with the help of multi-scale attention modules without increasing the computational cost significantly. Deformable DETR not only performs better but its training time is also 10$\times$ lower than that of the original DETR model [14]. Anchor DETR [122] replaces the learnable query tokens in [13] with anchor-point based queries, such that each query focuses on predicting the object near its anchor point. The anchor points can be fixed on a 2D grid, or learned from uniformly distributed points. Anchor DETR [122] requires 10$\times$ fewer training epochs with comparable performance. Pix2Seq [123] is a generic Transformer-based framework, without any specialized task-specific modules, which learns to directly produce a sequence of tokens with object descriptions (bounding boxes and class labels). A quantization and serialization scheme first converts bounding boxes and class labels into a sequence of discrete tokens. A generic Transformer-based encoder-decoder network is then used to generate these tokens in an auto-regressive manner, conditioned on previous predictions and image features.

#### 3.3.2 Detection with Pure Transformers

You Only Look at One Sequence (YOLOS) [124] is a simple, attention-only architecture directly built upon the ViT [132, 1].
It replaces the class token in ViT with multiple learnable object query tokens, and a bipartite matching loss is used for object detection, similar to [13]. YOLOS demonstrates the flexibility of ViTs for object detection, in a pure sequence-to-sequence learning manner, with minimal image-related 2D inductive biases. In a similar spirit, PVT [93] has been combined with DETR [13] to perform object detection with an end-to-end transformer pipeline. We note that it is feasible to combine other recent ViTs with transformer-based detection heads as well, to create pure ViT based designs [124], and we hope to see more such efforts in the future.

Figure 8: Axial attention module [133] that sequentially applies multi-head axial attention operations along the height and width axes. Image from [133].

### 3.4 Transformers for Segmentation

Self-attention can be leveraged for dense prediction tasks like image segmentation, which require modeling rich interactions between pixels. Below, we discuss axial self-attention [133], a cross-modal approach [15] that can segment regions corresponding to a given language expression, and ViT-based segmentation architectures [134, 101, 135].

Panoptic segmentation [136] aims to jointly solve the otherwise distinct tasks of semantic segmentation and instance segmentation by assigning each pixel a semantic label and an instance id. Global context can provide useful cues for such a complex visual understanding task. Self-attention is effective at modeling long-range contextual information, but applying it to large inputs for a dense prediction task like panoptic segmentation is prohibitively expensive. A naive solution is to apply self-attention either to downsampled inputs or to limited regions around each pixel [81]. Even after introducing these constraints, the self-attention still has quadratic complexity and sacrifices the global context. To tackle these issues, Wang et al.
[133] propose the position-sensitive axial-attention, in which the 2D self-attention mechanism is reformulated as two 1D axial-attention layers, applied to the height axis and the width axis sequentially (see Fig. 8). Axial attention is computationally efficient and enables models to capture the full-image context. It achieves competitive performance for the panoptic segmentation task on the COCO [75], Mapillary Vistas [137], and Cityscapes [73] benchmarks, and for image classification on the ImageNet dataset [138].

Cross-modal Self-attention (CMSA) [15] encodes long-range multi-modal dependencies between linguistic and visual features for the referring image segmentation task, which aims to segment entities in an image referred to by a language description. For this purpose, a set of cross-modal features is obtained by concatenating image features with each word embedding and the spatial coordinate features. Self-attention operates on these features and generates attention over the image corresponding to each word in the sentence. The segmentation network then performs self-attention at multiple spatial levels and uses a gated multi-level fusion module to refine segmentation masks via information exchange across multi-resolution features. A binary cross-entropy loss is used to train the overall model, which achieves good improvements on the UNC [139], G-Ref [140] and ReferIt [141] datasets.

While the segmentation approaches discussed above insert self-attention into their CNN based architectures, some recent works have proposed transformer-based encoder-decoder architectures. Segmentation Transformer (SETR) [134] has a ViT encoder, and two decoder designs based upon progressive upsampling and multi-level feature aggregation. SegFormer [101] has a hierarchical pyramid ViT [93] (without position encoding) as the encoder, and a simple MLP-based decoder with an upsampling operation to obtain the segmentation mask.
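To make the axial factorization of Fig. 8 concrete, a minimal single-head sketch (raw features serve as queries, keys, and values, and positional terms are omitted; assumptions for brevity):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attend_1d(seq):
    """Plain self-attention along a 1D sequence of shape (L, C)."""
    A = softmax(seq @ seq.T / np.sqrt(seq.shape[-1]))
    return A @ seq

def axial_attention(x):
    """Height-axis then width-axis 1D attention on x: (H, W, C). Two hops let every
    position reach the full image at O(H*W*(H+W)) instead of O((H*W)^2) cost."""
    H, W, C = x.shape
    y = np.stack([attend_1d(x[:, j]) for j in range(W)], axis=1)   # along height
    return np.stack([attend_1d(y[i]) for i in range(H)], axis=0)   # along width

out = axial_attention(np.random.default_rng(0).standard_normal((6, 5, 4)))
print(out.shape)  # (6, 5, 4)
```

Each 1D pass attends to only $H$ or $W$ positions per pixel, but chaining them lets information propagate across the whole image, which is what recovers the full-image context at a much lower cost.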
Segmenter [135] uses a ViT encoder to extract image features, and its decoder is a mask Transformer module which predicts segmentation masks, using learnable mask tokens and image-patch tokens as inputs. The authors also propose a baseline linear decoder which projects the patch embeddings to the classification space, thus producing coarse patch-level labels.

### 3.5 Transformers for Image and Scene Generation

Here, we discuss Transformer-based architectures [142, 143, 144, 145, 146, 23] for image synthesis, which is interesting from the perspective of generative modeling and of learning unsupervised representations for downstream tasks. Parmar et al. [142] develop an image generation model that can sequentially predict each pixel of an output image given its previously generated pixels (Fig. 9). Their approach models the joint distribution of the image pixels by factorizing it as a product of pixel-wise conditional distributions. Previously developed auto-regressive models for this task, such as PixelCNN [147], suffer from a limited receptive field, which hinders modeling long-term relationships in an image, _e.g._, part relationships or occlusions. Using self-attention, [142] enhances the receptive field without incurring a high computational cost (_e.g._, an effective receptive field of up to 256 pixels can be achieved, compared to the 25 pixels of PixelCNN [147]). The generative pipeline was also tested on conditional generation tasks, _e.g._, image super-resolution, image completion, and denoising.

Figure 9: (a) Self-attention block in Image Transformer [142]. Given one channel for a pixel $q$, the block attends to the memory of previously synthesized pixels ($m_{i}$), followed by a feed-forward sub-network. Positional encodings $p_{i}$ are added in the first layer. (b) The operation performed in Local Self-Attention (a 2D example is shown). The image is partitioned into a grid of spatial blocks known as query blocks.
In the self-attention operation, each pixel in a query block attends to all pixels in the memory block (shown in the cyan rectangle). White grid locations show masked inputs that have zero contribution towards the self-attention. Inspired by the success of the GPT model [5] in the language domain, image GPT (iGPT) [143] demonstrated that such models can be directly used for image generation tasks, and to learn strong features for downstream vision tasks (e.g., image classification). Specifically, iGPT trains the GPT v2 model [5] on flattened image sequences (1D pixel arrays) and shows that it can generate plausible image outputs without any external supervision. The generated samples depict the model’s ability to understand spatial relationships between pixels and high-level attributes such as object classes, texture, and scale. Notably, the design does not use any image-specific knowledge (e.g., the 2D position embeddings used in Image Transformer [142]). The features learned with iGPT’s unsupervised training mechanism compete impressively against other unsupervised approaches, achieving state-of-the-art performance on the CIFAR-10/100 [148] and STL [149] datasets while performing comparably to SimCLR (a contrastive learning approach) [150] on the ImageNet dataset. This is an astounding result, since the iGPT architecture is exactly the same as that used for language modeling tasks, and therefore it does not incorporate any prior domain-specific knowledge. Notably, the competing unsupervised CNN-based solutions widely adopt such priors in the form of architectural design, attention mechanisms, loss functions, and regularization [151, 152, 117, 153, 154]. However, on the downside, iGPT has a high compute cost: _e.g._, the iGPT-L version has a roughly $36\times$ higher training cost compared to MoCo [117], which is a state-of-the-art self-supervised feature learning approach.
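Both Image Transformer [142] and iGPT [143] rely on the same autoregressive factorization, $p(x)=\prod_i p(x_i \mid x_{<i})$, generating one pixel at a time conditioned on all previously generated ones. A toy sketch of that sampling loop, where the hypothetical `toy_conditional` stands in for the Transformer's predicted next-pixel distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_conditional(prefix, n_levels=4):
    """Stand-in for the Transformer's predicted distribution over the next
    pixel intensity given all previously generated pixels (hypothetical:
    it merely biases toward the running mean, for illustration only)."""
    logits = np.zeros(n_levels)
    if prefix:
        logits[int(round(np.mean(prefix)))] += 1.0
    e = np.exp(logits - logits.max())
    return e / e.sum()

def generate(n_pixels=16, n_levels=4):
    # joint p(x) factorized as a product of per-pixel conditionals p(x_i | x_<i)
    pixels = []
    for _ in range(n_pixels):
        probs = toy_conditional(pixels, n_levels)
        pixels.append(int(rng.choice(n_levels, p=probs)))
    return pixels

img = generate()
print(len(img))  # 16
```

The role of self-attention (Image Transformer) or the GPT backbone (iGPT) is precisely to make each conditional depend on a long prefix, rather than the small receptive-field prefix a PixelCNN sees.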
For this reason, training was generally limited to low resolutions of $\leq 64\times 64$, while convolutional architectures can effectively learn from high-resolution inputs. Transformers typically incur a high compute cost when applied to high-dimensional sequences. To overcome this limitation, Esser et al. [144] proposed to include inductive biases (commonly used in CNNs) alongside Transformers to improve their efficiency. Specifically, the local connectivity and spatial invariance biases built into the CNN structure are leveraged by learning a rich dictionary of visual patterns (using a Generative Adversarial approach). A Transformer is then used to learn the long-range interactions between the dictionary items to generate the outputs. In turn, they develop a conditional image generation model capable of producing very high-resolution images (up to the megapixel range) using Transformers. This is the first work that demonstrates the application of Transformers to generate such high-resolution images. Generative Adversarial Networks (GANs) [54] with CNNs as the default backbone have been very successful for visually appealing image synthesis [155, 156, 157]. TransGAN [145] builds a strong GAN model, free of any convolution operation, with both the generator and discriminator based upon the Transformer model [1]. The architecture of both the generator and discriminator is based upon the encoder in the original Transformer model [1]. For memory efficiency, the generator contains multiple stages, with up-sampling modules in between, which gradually increase the resolution of the feature maps (input sequence length) while reducing the embedding dimension. The discriminator of TransGAN takes flattened image-patches as tokens, similar to [132]. The authors introduce different training techniques including data augmentation, training with an auxiliary task and injecting locality to self-attention to scale up their model for high-quality image synthesis [144].
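TransGAN's memory-saving generator design, stages that trade embedding dimension for spatial resolution, can be sketched as follows (a toy numpy surrogate: the Transformer blocks inside each stage are omitted, and the nearest-neighbor upsampling and random projection are illustrative stand-ins):

```python
import numpy as np

rng = np.random.default_rng(5)

def upsample_stage(tokens, dim_out):
    """One TransGAN-style generator stage (toy): upsampling doubles the
    token-grid side (4x more tokens) while a projection reduces the
    embedding dimension, keeping memory roughly in check."""
    n, d = tokens.shape
    side = int(np.sqrt(n))
    grid = tokens.reshape(side, side, d)
    # nearest-neighbor upsample the token grid 2x in each spatial direction
    grid = grid.repeat(2, axis=0).repeat(2, axis=1)
    proj = rng.standard_normal((d, dim_out)) * 0.1  # illustrative projection
    return grid.reshape(-1, d) @ proj               # (4n, dim_out)

x = rng.standard_normal((64, 128))   # start: 8x8 tokens, embedding dim 128
for d in (64, 32):                   # each stage: 4x tokens, smaller embeddings
    x = upsample_stage(x, d)
print(x.shape)  # (1024, 32)
```

The sequence length grows 4x per stage while the per-token width shrinks, which is what lets the generator reach high resolutions without quadratic attention cost exploding at full width.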
The TransGAN model achieves state-of-the-art results in terms of Inception Score and Fréchet Inception Distance (FID) on STL-10 and performs favorably compared with its CNN-based GAN counterparts on other datasets. Unlike previous image generation methods [142, 143, 144], which directly predict image outputs, [23] learns to generate parameters of 3D objects to be placed in a given scene. Specifically, SceneFormer [23] studies the 3D room-layout conditioned scene generation task. Given the empty room shape, [23] can propose new object configurations in the room while maintaining realism. Remarkably, the model does not use any appearance information and only learns to generate new scenes by modeling the inter-object relationships using self-attention in Transformers. Similar to how a Transformer operates on a sentence, it is applied to a sequence of objects to predict the next suitable object in a scene. Specifically, the size, pose, location, and category of the next object are predicted by the Transformer model. A start token indicates the initiation of inference, and the number of output tokens indicates the objects generated by the model in a sequence. The authors also explore generating new scenes given a textual description of the room layout. The independence from appearance makes the approach efficient, enabling interactive scene generation. The task of generating realistic images from text is interesting and practically valuable (_e.g._, for artistic content creation), but at the same time highly challenging. Prior text-to-image synthesis approaches [158, 159, 160, 161] are mostly based on GANs [54]. Although these methods produce encouraging results, they are far from being photo-realistic. Ramesh et al. [20] recently proposed DALL·E, a Transformer model capable of generating high-fidelity images from a given text description. The DALL·E model has 12 billion parameters and is trained on a large set of text-image pairs taken from the internet.
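DALL·E's single-stream formulation can be sketched by concatenating text tokens and flattened image codes into one sequence trained with next-token prediction (the token ids below are random placeholders; the layout of 256 text tokens plus a 32$\times$32 grid of image codes follows the paper, and the vocabulary sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

TEXT_LEN, GRID = 256, 32                          # 256 text tokens + 32x32 image codes
text  = rng.integers(0, 16384, size=TEXT_LEN)     # placeholder BPE text token ids
codes = rng.integers(0, 8192, size=(GRID, GRID))  # placeholder dVAE latent codes

# single stream of 1280 tokens, modeled autoregressively
stream  = np.concatenate([text, codes.ravel()])
inputs  = stream[:-1]   # context seen by the model
targets = stream[1:]    # next-token prediction targets

print(stream.size)  # 1280
```

Because text precedes the image in the stream, conditioning on a caption and completing a partially given image are both just prefix-conditioned continuation under the same objective.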
Before training, images are first resized to 256$\times$256 resolution, and subsequently compressed to a 32$\times$32 grid of latent codes using a pre-trained discrete variational autoencoder [162, 163]. DALL·E takes as input a single stream of 1280 tokens (256 for the text and 1024 for the image), and is trained to generate all other tokens autoregressively (one after another). It provides the flexibility to generate images either from scratch (Fig. 10(a)) or by extending existing images (Fig. 10(b)), while staying faithful to the text caption.

Figure 10: Images generated by DALL·E [20] from the following text prompts. (a) _An armchair in the shape of an avocado._ (b) _A photo of San Francisco’s golden gate bridge._ Given a part of the image (in the green box), DALL·E performs image completion. (c) _An emoji of a baby penguin wearing a blue hat, red gloves, green shirt, and yellow pants._ (d) _An extreme close-up view of a capybara sitting in a field._ (e) _A cross-section view of a pomegranate._ (f) _A penguin made of watermelon._ (g) _The exact same cat on the top as a sketch on the bottom._

The authors demonstrate the effectiveness of DALL·E by creating images from text describing a wide variety of real and fictional concepts. While generating images purely from textual captions, DALL·E shows impressive performance at controlling multiple objects and their attributes (Fig. 10(c)), rendering a certain viewpoint (Fig. 10(d)), capturing an object’s internal structure (Fig. 10(e)), and combining unrelated objects (Fig. 10(f)). Furthermore, DALL·E can perform image-to-image translation (Fig. 10(g)) guided by the input text.

### 3.6 Transformers for Low-level Vision

After witnessing the success of Transformer models in high-level vision problems, numerous Transformer-based methods have been proposed for low-level vision tasks, including image super-resolution [16, 164, 19], denoising [165, 19], deraining [165, 19], and colorization [24].
Image restoration requires pixel-to-pixel correspondence from the input to the output images. One major goal of restoration algorithms is to preserve desired fine image details (such as edges and texture) in the restored images. CNNs achieve this by employing a single-scale architecture design that does not involve any downsampling operation. Since the computational complexity of self-attention in Transformer models increases quadratically with the number of image patches, it is infeasible to develop a Transformer model that operates on a single-scale feature processing pipeline. Consequently, these Transformer-based image restoration models make use of various strategies to reduce the computational burden, such as computing attention on local image windows [164], performing spatial reduction attention [166], and employing an encoder-decoder design [19, 165]. Here, we briefly discuss a few image restoration Transformer models.

#### 3.6.1 Transformers for Image Processing Tasks

Top performing algorithms for high-level computer vision tasks such as object detection and semantic segmentation often employ backbone models that are pre-trained on large-scale datasets, _e.g._, ImageNet. In contrast, algorithms for low-level vision tasks such as image denoising, super-resolution, and deraining are directly trained on task-specific data and thereby suffer from the following limitations: (i) the small number of images available in task-specific datasets (_e.g._, the commonly used DIV2K dataset for image super-resolution contains only 2000 images), and (ii) the model trained for one image processing task does not adapt well to other related tasks. Chen et al. [19] propose a pre-trained model based on the Transformer architecture, named Image Processing Transformer (IPT). It is capable of performing various image restoration tasks such as super-resolution, denoising, and deraining.
The overall architecture of IPT consists of multiple task-specific heads and tails to deal with different tasks separately, and a shared encoder-decoder Transformer body. Since exploiting Transformers at their full potential requires training on large-scale data, [19] takes the clean (ground-truth) images from the ImageNet benchmark and synthesizes their degraded versions for different tasks. For example, bicubic interpolation is used for generating low-resolution images, additive white Gaussian noise is added to prepare noisy data, and hand-crafted rain streaks are applied to obtain rainy images. In total, 10 million images are used to pre-train the IPT model. During training, each task-specific head takes as input a degraded image and generates visual features. These feature maps are divided into small crops and subsequently flattened before feeding them to the Transformer encoder (whose architecture is the same as [1]). The outputs of the encoder along with the task-specific embeddings are given as input to the Transformer decoder. The features from the decoder output are reshaped and passed to the multi-tail that yields restored images. The IPT model is optimized with an L1 loss. Experimental results show that the pre-trained IPT model, when fine-tuned for a specific low-level vision task, can provide significant performance gains over the state-of-the-art methods [167, 168, 169].

#### 3.6.2 Transformers for Super-Resolution

Recent years have seen major performance breakthroughs for super-resolution (SR) due to convolutional neural networks (CNNs). Principally, the quality of super-resolved images generated by CNNs is dependent on the choice of optimization objective. While the SR methods [170, 171, 172, 167, 173] that are based on pixel-wise loss functions (_e.g._, L1, MSE, etc.)
yield impressive results in terms of image fidelity metrics such as PSNR and SSIM, they struggle to recover fine texture details and often produce images that are overly smooth and perceptually less pleasant. Further, _perceptual_ SR approaches [174, 175, 176, 177, 52], in addition to a per-pixel loss, employ an adversarial loss [54] and a perceptual loss [178] based on deep features extracted from pre-trained CNNs. While these methods generate images that are sharp, visually pleasant, and perceptually plausible, they show a substantial decrease in reconstruction accuracy measured in PSNR/SSIM. Moreover, the perceptual SR algorithms have a tendency to hallucinate fake textures and cause artifacts. The above-mentioned SR approaches follow two distinct (but conflicting) research directions: one maximizing the reconstruction accuracy and the other maximizing the perceptual quality, but never both.

Figure 11: Diagram of the texture Transformer module. $Q$ (query), $K$ (key) and $V$ (value) represent texture features extracted from a (bicubic upsampled) low-resolution image, a sequentially down/upsampled reference image, and an original reference image, respectively. The relevance embedding aims to estimate similarity between low-resolution and reference images. $H$ and $S$ respectively denote hard and soft attentions computed from the relevance embedding. $T$ indicates high-resolution texture features that are then transferred to the features $F$ of the low-resolution image. Figure is from [16].

To alleviate the trade-off between perceptual reproduction and accurate reproduction, Yang et al. [16] propose a Transformer network (TTSR) for super-resolution. During training, TTSR uses paired LR-HR images, as well as reference (Ref) images with content similar to that of the LR images. TTSR learns to search for relevant regions in the Ref image and transfers rich textures to help super-resolve the input LR image. The texture Transformer module of the TTSR method (see Fig. 11) consists of four core components: (1) _Learnable texture extractor:_ takes as input LR$\uparrow$, Ref$\downarrow\uparrow$, and Ref images, and generates the texture features query (Q), key (K), and value (V), respectively. Here, $\uparrow$ denotes a bicubic upsampling operation, and $\downarrow\uparrow$ represents bicubic down-sampling followed by an upsampling operation. (2) _Relevance embedding:_ first unfolds Q and K into patches and then computes the similarity of each patch in Q with each patch in K in order to generate hard and soft attention maps. (3) _Hard-attention:_ transfers HR texture features from V to (LR features) Q using the hard attention map. (4) _Soft-attention:_ further enhances relevant features while suppressing less relevant ones. While the TTSR [16] method deals with reference-based image super-resolution, most of the research is conducted on the single-image super-resolution problem, in which only LR-HR paired images are available. Since the computational complexity of the original self-attention operation is prohibitively high for high-resolution images, a few efficient Transformer models have recently been proposed that employ window-based attention (SwinIR [164]) or a spatial resolution reduction operation in the attention module (ESRT [166]) to perform super-resolution.

#### 3.6.3 Colorization Transformer

Given a grayscale image, colorization seeks to produce the corresponding colorized sample. It is a one-to-many task, as for a given grayscale input, there exist many possibilities in the colorized output space. The challenging nature of this task requires probabilistic models capable of producing multiple colorized output samples. Colorization Transformer [24] is a probabilistic model based on a conditional attention mechanism [179]. It divides the image colorization task into three sub-problems and proposes to solve each task sequentially with a different Transformer network.
The authors first train a Transformer network to map a low-resolution grayscale image to a 3-bit low-resolution colored image. Low-resolution images in turn allow the training of larger models. The 3-bit low-resolution colored image is then upsampled to an 8-bit RGB sample by another Transformer network in the second stage of training. Finally, a third-stage Transformer is trained to increase the spatial resolution of the 8-bit RGB sample produced by the second-stage Transformer. The self-attention used in the Colorization Transformer is based on the row/column attention layers introduced in [179]. These layers capture the interaction between each pixel of an input image while being computationally less costly. The row-wise attention layer applies self-attention to all pixels in a given row, while the column-wise attention layer considers pixels only in a given column of an image. This work [24] is the first successful application of Transformers trained to colorize grayscale images at high (256$\times$256) resolution.

Figure 12: An overview of Transformer models used for multi-modal tasks in computer vision. The Transformer designs in this category can be grouped into single-stream (UNITER [43], OSCAR [44], VideoBERT [17], Unicoder-VL [180], VisualBERT [63] and VL-BERT [22]) and dual-stream architectures (LXMERT [21], ViLBERT [181] and PEMT [182]). A key distinction between models is the choice of loss functions. While most of the multi-modal methods are focused on images as visual data, VideoBERT [17] and PEMT [182] are designed to work on video streams and leverage unique modalities, e.g., audio signals in videos [182].

### 3.7 Transformers for Multi-Modal Tasks

Transformer models have also been extensively used for vision-language tasks such as visual question answering (VQA) [183], visual commonsense reasoning (VSR) [184], cross-modal retrieval [185] and image captioning [29].
Several works in this direction target effective vision-language pre-training (VLP) on large-scale multi-modal datasets to learn generic representations that effectively encode cross-modality relationships (_e.g._, grounding semantic attributes of a person in a given image). These representations can then be transferred to downstream tasks, often obtaining state-of-the-art results. Notably, several of these models still use CNNs as the vision backbone to extract visual features, while Transformers are mainly used to encode text, followed by the fusion of language and visual features. Such models generally apply the vanilla multi-layer Transformer [1] with multi-modal inputs and do not introduce fundamental changes to the core attention block. However, their main distinction is in the configuration of the Transformers and the loss functions, based on which we categorize them into: (a) Multi-stream Transformers (see Sec. 3.7.1) and (b) Single-stream Transformers (see Sec. 3.7.2). The _single-stream_ designs feed the _multi-modal_ inputs to a single Transformer, while the multi-stream designs first use independent Transformers for each modality and later learn cross-modal representations using another Transformer (see Fig. 12). Besides these vision-language pre-training methods, we also explain visual grounding approaches towards the end of this section (see Sec. 3.7.3).

#### 3.7.1 Multi-stream Transformers

Vision and Language BERT (ViLBERT) [181] was the first extension of the BERT model to the multi-modal domain. The goal was to learn representations that can jointly model images and natural language. For this purpose, ViLBERT developed a two-stream architecture where each stream is dedicated to modeling the vision or language inputs (Fig. 12-h). The architecture of both parallel streams is a series of Transformer blocks similar to the BERT model. Subsequently, co-attentional Transformer layers are applied to learn cross-modal relationships.
The co-attentional framework is simple: query, key, and value matrices are computed for each modality in the standard way [1], and then the key-value pairs of one modality are passed on to the other modality’s attention head. ViLBERT applies VLP on a set of proxy tasks defined on the Conceptual Captions dataset (with 3.3M images with weak captions) and later fine-tunes the model on downstream tasks such as VQA. The pre-training phase operates in a self-supervised manner, i.e., pretext tasks are created without manual labeling on the large-scale unlabelled dataset. These pretext tasks include predicting whether the text and image inputs are related and predicting the semantics of masked image regions and textual inputs (_e.g._, similar to reconstructing masked words in text in the BERT model [3]). This way, the model learns the inherent structure in the data during pre-training and also models cross-domain associations. With evaluations on several tasks, the authors demonstrated that a two-stream model can perform better than a single-stream model that uses shared parameters to model both the language and vision domains. Similar to ViLBERT [181], Learning Cross-Modality Encoder Representations from Transformers (LXMERT) [21] also uses a two-stream architecture based on the BERT framework. The main difference lies in the object-relationship encoder that is used to model the visual features instead of the simple image-level features used in ViLBERT. The information in the two streams is then fused across modalities using cross-attention blocks similar to [181]. Compared to the two pretext tasks used for VLP in [181], LXMERT uses five pre-training tasks including masked object and language prediction, cross-modality matching, and visual question answering (Fig. 12-g). The pre-trained model is fine-tuned on the VQA task; however, a high similarity between the pre-training and fine-tuning tasks raises questions about the generalizability of the learned representations to new tasks.
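The key-value exchange of the co-attentional layer can be sketched as follows (a toy single-head numpy version with randomly initialized projections; dimensions are illustrative):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def co_attention(x_vis, x_txt, d=16):
    """Toy co-attentional layer: each modality computes Q/K/V in the
    standard way, then hands its key-value pairs to the other modality's
    attention head, so vision attends over language and vice versa."""
    rng = np.random.default_rng(0)
    W = {n: rng.standard_normal((x.shape[1], d)) * 0.1
         for n, x in [("qv", x_vis), ("kv", x_vis), ("vv", x_vis),
                      ("qt", x_txt), ("kt", x_txt), ("vt", x_txt)]}
    Qv, Kv, Vv = x_vis @ W["qv"], x_vis @ W["kv"], x_vis @ W["vv"]
    Qt, Kt, Vt = x_txt @ W["qt"], x_txt @ W["kt"], x_txt @ W["vt"]
    # cross-modal exchange: swap each modality's keys/values
    vis_out = softmax(Qv @ Kt.T / np.sqrt(d)) @ Vt
    txt_out = softmax(Qt @ Kv.T / np.sqrt(d)) @ Vv
    return vis_out, txt_out

v, t = co_attention(np.random.randn(36, 20), np.random.randn(12, 20))
print(v.shape, t.shape)  # (36, 16) (12, 16)
```

Each stream keeps its own queries (and hence its own sequence length), which is what lets the two modalities have different numbers of tokens while still conditioning on one another.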
To test this, the authors conducted generalization experiments on the Natural Language for Visual Reasoning (NLVR) task [186], demonstrating impressive improvements on novel tasks. Lee et al. [182] note that multi-modal representation learning approaches like VideoBERT [17] and ViLBERT [181] generally keep the language processing part fixed to a pre-trained model (_e.g._, BERT [3]) to reduce training complexity. For the first time in the literature, they propose to learn an end-to-end multi-modal bidirectional Transformer model called PEMT on audio-visual data from unlabeled videos. First, short-term (_e.g._, 1-3 seconds) video dynamics are encoded using CNNs, followed by a modality-specific Transformer (audio/visual) to model long-term dependencies (_e.g._, 30 seconds). A multi-modal Transformer is then applied to the modality-specific Transformer outputs to exchange information across visual-linguistic domains. However, learning such a model in a naive form would incur huge memory requirements. To reduce the parametric complexity, the parameters are shared across layers within each Transformer, which leads to up to an 80% parameter reduction. The Transformer is trained using a contrastive learning approach based on content-aware negative sampling (Fig. 12-i). Specifically, the model uses the features obtained from the CNNs learned during the training phase to select negative samples that are visually similar to the positive instances. This work also compares various fusion strategies adopted in earlier works, such as early (VideoBERT [17] and VL-BERT [22]), mid-level (ViLBERT [181] and LXMERT [21]) and late fusion mechanisms, and shows that mid-level fusion is the optimal choice. The proposed model is pre-trained on the Kinetics-700 [187] dataset and later fine-tuned on downstream video classification tasks such as short video classification on UCF101 [188], audio classification on ESC50 [189] and long-term action recognition on the Charades [190] and Kinetics-Sounds [65] datasets.
Tan and Bansal [191] introduce the concept of ‘_vokens_’ (images related to language tokens extracted from sentences). The vokens (visualized tokens) provide visual supervision to the language model to learn better features. The motivation is that humans learn languages by correlating visual information with semantic concepts. In a similar spirit to other self-supervised language representation learning methods [181, 3], they learn representations by defining an auxiliary voken-prediction task. Since the existing datasets encode limited visually grounded tokens, they propose a vokenization method to map language tokens to visual vokens, as illustrated in Fig. 13. The approach uses language-based retrieval for such a mapping and transfers a model trained on a small labeled dataset (MS-COCO) to a large dataset (Wikipedia). Furthermore, it is ensured that the sentence-wide context is considered to obtain the token-voken mapping. The resulting model trained using the generated vokens outperforms the state-of-the-art BERT model on a diverse set of NLP tasks. In this sense, the proposed model does not evaluate on vision tasks; however, it uses vision as a useful grounding cue to train the language model, hence we include it in the multi-modal representation learning group. Vision-and-Language Navigation (VLN) aims to predict a navigation plan on a map based on vision and language inputs. Transformer models were used earlier in [192, 193] for the VLN task. These works first pre-train a cross-modal Transformer using self-supervision on vision and language pairs and subsequently fine-tune it on specific VLN tasks. While these works learn attention between image regions and language, Chen et al. [194] propose to learn cross-modal attention between language inputs and spatial topological maps (which represent an agent’s environment as a graph whose nodes denote places and whose edges denote their connectivity).
Given the topological map and natural language inputs, a VLN task using the Transformer model bears resemblance to sequence prediction in NLP. Specifically, at each time instance, the cross-modal Transformer predicts a single node of the topological map in the navigation plan. The individual language and map encodings are first processed using uni-modal encoders and later a cross-modal encoder (similar to LXMERT [21]) is applied to aggregate information across modalities. To denote positions in the map, a learned trajectory position encoding is appended to the map features. Based on this Transformer setup, [194] reports a full navigation system that can freely explore the environment and intelligently plan its actions. CLIP [195] is a contrastive approach to learn image representations from text, with a learning objective that maximizes the similarity of correct text-image pair embeddings within a large batch. Specifically, given a batch of $N$ image-text pairs, CLIP learns a multi-modal embedding space, by jointly training an image-encoder and a text-encoder, such that the cosine similarity of the $N$ valid image-text pairs is maximized, while that of the remaining $N^{2}-N$ pairs is minimized. The authors consider ResNet-50 [67] and the Vision Transformer (ViT) [132] for encoding images. The modified Transformer model [1], as in [5], is employed for encoding text. CLIP is trained on a large corpus of 400 million image-text pairs and demonstrates excellent zero-shot transfer capabilities. At inference, the names of the classes are used as input to the text-encoder, and the similarity of the encoded image is computed with all encoded texts (classes) to find the image-text pair with the highest match. CLIP achieves an astounding zero-shot classification accuracy of 75% on ImageNet, without using any supervision from the ImageNet training set. The authors further demonstrate the zero-shot transfer capabilities of the CLIP model on 30 different computer vision benchmarks.
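CLIP's symmetric contrastive objective can be sketched in a few lines of numpy (a toy version: the learned temperature and the very large batch sizes of the actual model are replaced by illustrative values):

```python
import numpy as np

def clip_loss(img_emb, txt_emb, temperature=0.07):
    """Sketch of CLIP's symmetric contrastive objective: for N paired
    embeddings, maximize cosine similarity on the diagonal (matched pairs)
    relative to the N^2 - N mismatched pairs, in both directions."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature          # (N, N) scaled cosine similarities
    labels = np.arange(len(logits))             # correct pair is the diagonal
    def ce(l):                                  # cross-entropy over rows
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()
    # symmetric loss: image->text and text->image
    return (ce(logits) + ce(logits.T)) / 2

rng = np.random.default_rng(0)
N, D = 8, 32
paired = rng.standard_normal((N, D))
loss_matched = clip_loss(paired, paired)                       # identical pairs: low loss
loss_random  = clip_loss(paired, rng.standard_normal((N, D)))  # mismatched: higher loss
print(loss_matched < loss_random)  # True
```

Zero-shot classification reuses the same machinery: the class names are encoded as texts and the row of `logits` for a query image is read off as class scores.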
Note that CLIP with a ResNet took 18 days to train on 592 V100 GPUs, while CLIP with a ViT took 12 days on 256 V100 GPUs. This highlights the computational cost of CLIP.

#### 3.7.2 Single-stream Transformers

Different from two-stream networks like ViLBERT [181] and LXMERT [21], VisualBERT [63] uses a single stack of Transformers to model both domains (images and text). The input sequence of text (_e.g._, a caption) and the visual features corresponding to the object proposals are fed to the Transformer, which automatically discovers relations between the two domains. Notably, the VisualBERT architecture is somewhat similar to VideoBERT [17] (explained in Sec. 3.8), but instead of only focusing on cooking videos, VisualBERT evaluates on various visual-linguistic tasks (_e.g._, VCR, NLVR, VQA, and visual grounding). The VisualBERT model first applies task-agnostic pre-training using two objectives (Fig. 12-e). The first objective simply attempts to predict missing text tokens using the image features and the remaining textual tokens. The second objective attempts to differentiate between the true and false caption of a given image. After task-agnostic pre-training, the authors propose to perform task-specific pre-training to bridge the domain gap before the final fine-tuning to the downstream task. Su et al. [22] propose a multi-modal pre-training approach to learn features that are generalizable to multi-modal downstream tasks such as Visual Commonsense Reasoning and Visual Question Answering. This endeavor requires adequately aligning the visual and linguistic cues so that an effective composite representation is learned. To this end, [22] builds on the BERT model and inputs both the visual and language features. The language features correspond to the tokens in the input sentence and the visual features correspond to the regions of interest (RoIs) from the input image (obtained via a standard Faster R-CNN).
Specifically, the model is pre-trained on both a visual-linguistic dataset (Conceptual Captions [196]) as well as language-only datasets (_e.g._, Wikipedia). The loss function is identical to BERT, where the model is trained to predict the masked-out words or visual RoIs (Fig. 12-f). Contrary to other works such as UNITER [43], VL-BERT claims that visual-linguistic matching tasks are not useful during pre-training, although this is in contrast to evidence from later efforts [180]. Their results on several multi-modal tasks show the benefit over language-only pre-training (_e.g._, in BERT). Universal Encoder for Vision and Language (Unicoder-VL) [180] learns multi-modal representations using large-scale image-caption pairs. The language and image inputs are fed to a single Transformer model (with multiple successive encoders) to learn joint embeddings. To this end, it uses masked word prediction, masked object classification, and visual-linguistic matching as self-supervision tasks during pre-training (Fig. 12-d). Notably, the visual-linguistic matching is carried out only at the global level (i.e., image-sentence alignment). The model is evaluated on image-text retrieval, zero-shot learning, and visual commonsense reasoning, where it performs better than previous models such as ViLBERT [181] and VisualBERT [63]. This shows the significance of rich self-supervised tasks and advocates for a unified Transformer architecture to learn multi-modal features in a common framework. The Unified Vision-Language Pre-training (VLP) [197] model uses a single Transformer network for both the encoding and decoding stages. This stands in contrast to BERT-inspired VLP models [17, 198, 22, 63] which use independent encoder and decoder networks. Joint modeling of the encoding and decoding stages allows the Unified VLP model to perform well for both image captioning and visual question answering tasks, when fine-tuned on these individual tasks.
The intuition for shared modeling of the encoding and decoding stages stems from the need to better share cross-task information during pre-training. The unified model consists of a stack of 12 Transformer blocks, each with a self-attention layer followed by a feed-forward module. The self-supervised objectives used for pre-training include masked vision-language predictions. Here, the authors explore two variants, i.e., bidirectional and sequence-to-sequence prediction of masked words, where different context encodings are used for the two types of objectives. The proposed approach is evaluated on COCO Captions, Flickr30K Captions and VQA 2.0 and obtains encouraging results compared to previous methods on image captioning and VQA [199]. Universal image-text representation (UNITER) [43] performs pre-training on four large-scale visual-linguistic datasets (MS-COCO [75], Visual Genome [200], Conceptual Captions [196] and SBU Captions [201]). The learned representations transfer well to downstream tasks such as VQA, multi-modal retrieval, visual commonsense reasoning, and NLVR. In order to emphasize learning the relationships between the visual and language domains, [43] specifically designs pre-training tasks to predict a masked visual or text region conditioned on the input of the other domain, and to align language and visual inputs on both the global (image-text) and local (word-region) levels (Fig. 12-a). These tasks come in addition to the conventional masked language modeling task used in BERT, and explicitly include fine-grained word-region alignment alongside conditional masking of inputs, which were not considered in earlier works such as VL-BERT [22], VisualBERT [63], ViLBERT [181] and Unicoder-VL [180]. In common with the other approaches, they adopt the Transformer architecture proposed in BERT, which operates on both the visual and language embeddings.
In contrast to applying independent Transformers to the language and visual inputs (as in ViLBERT [181] and LXMERT [21]), UNITER adopts a single Transformer applied to both the textual and image inputs, like [180, 63, 22]. The VisualBERT [63], UNITER [43], VL-BERT [22], ViLBERT [181], and Unicoder-VL [180] models for VLP concatenate image and text features and leave it to the self-attention to automatically discover cross-modal relationships. This can complicate the visual grounding of semantic concepts in an image. To address this problem, Object-Semantics Aligned Pre-Training (Oscar) [44] first uses an object detector to obtain object tags (labels), which are subsequently used as a mechanism to align relevant visual features with the semantic information (Fig. 12-b). The motivation is that the textual content generally pertains to the major objects in the image; therefore, by explicitly adding those image labels to the input, visual features can be better attended. Similar to BERT [3], Oscar uses a Masked Token Loss for VLP, where different tokens in the textual input and image tags are randomly masked and the model predicts these missing tokens. Further, it also uses a contrastive loss that discriminates between original and noisy/fake image-tag pairs. The representations thus learned are fine-tuned on VQA, cross-modality retrieval, natural language reasoning, and image captioning tasks to obtain better performance compared to VLP methods that do not use object tags.

The recent VinVL [202] approach extends Oscar to the object detection task and learns object instance-centered relationships between the visual and language domains using an adapted pre-training scheme. The model is trained on a collection of datasets (MS-COCO, OpenImages, Visual Genome and Objects365) and was demonstrated to precisely relate semantic attributes with visual information, providing better transferability to downstream visual comprehension tasks.
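The masked-token objective shared by these VLP models (BERT's masked language modeling, UNITER's conditional masking, Oscar's Masked Token Loss) boils down to the same recipe: mask a fraction of input tokens and score the model's predictions only at the masked positions. A minimal numpy sketch of that recipe, assuming a toy vocabulary; the function names and the 15% masking rate (BERT's convention) are illustrative, not taken from any of these implementations:

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_tokens(tokens, mask_id, mask_prob=0.15):
    """BERT-style masking: replace ~mask_prob of the positions with [MASK]."""
    tokens = np.asarray(tokens)
    mask = rng.random(tokens.shape[0]) < mask_prob
    mask[0] = False                       # keep the [CLS]-style position intact
    masked = tokens.copy()
    masked[mask] = mask_id
    return masked, mask

def masked_token_loss(logits, targets, mask):
    """Cross-entropy averaged over the masked positions only."""
    logits = logits[mask]                 # (num_masked, vocab_size)
    logits = logits - logits.max(-1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(-1, keepdims=True))
    return -log_probs[np.arange(len(log_probs)), targets[mask]].mean()
```

In the full models, the logits would come from the joint Transformer over concatenated visual and text tokens, and masked image regions are predicted analogously (e.g., as object-class prediction over ROI features).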
Figure 13: Visualized tokens (Vokens) [191]: A language model is visually supervised using closely related images, which leads to better feature representations from the pretrained model. Figure from [191].

#### 3.7.3 Transformers for Visual Grounding

Modulated DETR (MDETR) [203] has a CNN and a BERT backbone to extract features from the image and text inputs, respectively. The visual and text features are then separately linearly projected to a shared space, concatenated, and fed to a Transformer model (with an architecture similar to DETR) to predict the bounding boxes for objects corresponding to the queries in the grounding text. The model is trained using a loss which predicts a uniform distribution over all relevant text query tokens specific to the predicted bounding boxes. An additional contrastive loss term ensures correspondence between the visual and text embeddings. TransVG [204] is a simple design, where visual and text features are fused together in a Transformer module, and the bounding box corresponding to the query is directly regressed using a learnable token (input to the Transformer module along with the visual and text features). Referring Transformer [205] is also a simple one-stage design where the text and image features are fused in a Transformer encoder, and the Transformer-based decoder then directly regresses bounding boxes or segmentation masks. Visual Grounding with Transformer [206] has an encoder-decoder architecture, where visual tokens (features extracted from a pretrained CNN model) and text tokens (parsed through an RNN module) are processed in parallel in two distinct branches of the encoder, with cross-modality attention to generate text-guided visual features. The decoder then computes attention between the text queries and visual features and predicts query-specific bounding boxes.
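The learnable-token idea behind TransVG can be illustrated compactly: a dedicated token is appended to the fused visual-text sequence, updated by attention, and mapped by a small head to four normalized box coordinates. The sketch below is a single-head, single-layer stand-in with random weights; all sizes and names are illustrative assumptions, while the real model uses trained multi-layer Transformers:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 32                                    # shared embedding width (illustrative)

def attention(q, k, v):
    """Scaled dot-product attention with a softmax over the key axis."""
    scores = q @ k.T / np.sqrt(k.shape[-1])
    w = np.exp(scores - scores.max(-1, keepdims=True))
    w = w / w.sum(-1, keepdims=True)
    return w @ v

# Toy stand-ins for projected visual patches and text tokens in the shared space.
visual = rng.normal(size=(49, d))         # e.g. a 7x7 feature map, flattened
text = rng.normal(size=(8, d))            # tokenized referring expression
reg_token = rng.normal(size=(1, d))       # learnable [REG] token (trained in practice)

# One fusion step: the [REG] token attends over the whole multi-modal sequence.
seq = np.concatenate([reg_token, visual, text], axis=0)
fused = attention(seq, seq, seq)[0]       # updated [REG] representation, shape (d,)

# A box head maps the token to (cx, cy, w, h), squashed into [0, 1].
W_box = rng.normal(size=(d, 4)) * 0.1
box = 1.0 / (1.0 + np.exp(-(fused @ W_box)))
```

Regressing the box directly from one token is what removes the need for anchor boxes or region proposals in these one-stage grounding designs.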
### 3.8 Video Understanding

Existing approaches for audio-video data analysis generally learn representations on short-length videos (up to a few seconds long), which allows them to encode only short-range dependencies [1, 32]. Long-range dependency modeling is desirable in various uni-modal and multi-modal learning tasks such as activity recognition [187, 71, 207, 208, 209]. Below, we explain recent approaches that seek to resolve this challenge using the expressivity of Transformer networks. It is important to note that several of these works [210, 182, 17, 18] still employ (pretrained) CNNs to encode image/frame-level features in the videos, on top of which Transformers are applied to model the wide context. A few exceptions include [211, 212, 209, 213], which obtain frame-level features using ViT-based backbones as well.

#### 3.8.1 Joint Video and Language Modeling

The VideoBERT [17] model leverages Transformer networks and the strength of self-supervised learning to learn effective multi-modal representations. Specifically, VideoBERT uses the prediction of masked visual and linguistic tokens as a pretext task (Fig. 12-c). This allows modeling high-level semantics and long-range temporal dependencies, important for video understanding tasks. Given a video, [17] converts speech to text using off-the-shelf speech recognition systems and applies vector quantization (clustering) to obtain visual features from pre-trained video classification models. The BERT model is then directly applied to these concatenated sequences of language and visual tokens to learn their joint distribution. The model can be trained with text-only, video-only, and video+text data. The resulting model showcases interesting capabilities for cross-modal predictions such as video generation from a given textual input (_e.g._ , captions or a cooking recipe) and (video-based) future forecasting. The video+text model uses a visual-linguistic alignment task to learn cross-modality relationships.
The definition of this pretext task is simple: given the latent state of the [cls] token, the task is to predict whether the sentence is temporally aligned with the sequence of visual tokens. Further, the learned representations are shown to be very useful for downstream tasks such as action classification, zero-shot classification, and video captioning.

Zhou et al.[210] explore Masked Transformers for dense video captioning. This requires generating language descriptions for all events occurring in a video. Existing works on this problem generally operate sequentially, i.e., they first detect events and then generate captions in separate sub-blocks. [210] proposes a unified Transformer network to tackle both tasks jointly, thereby seamlessly integrating the multi-modal tasks of event detection and captioning. First, a video encoder is used to obtain frame-wise representations, followed by two decoder blocks focused on proposing the video events and the captions. Since untrimmed videos are considered, a masking network is used in the captioning decoder to focus on describing a single event proposal. Remarkably, [210] was the first approach to target dense video captioning using non-recurrent models, and used self-attention in the encoder (applied on CNN-derived features) to model broad-range context between video frames. Experiments on the ActivityNet Captions [214] and YouCookII [215] datasets showed good improvements over previous recurrent-network and two-stage approaches.

#### 3.8.2 Video Action Recognition

The traditional CNN-based methods in video classification generally perform 3D spatio-temporal processing over limited intervals to understand videos. Neimark _et al._ [211] propose the Video Transformer Network (VTN), which first obtains frame-wise features using a 2D CNN and applies a Transformer encoder (Longformer [103]) on top to learn temporal relationships.
Longformer is an attractive choice for processing long sequences (with an arbitrary length $n$) due to its $\mathcal{O}(n)$ complexity. The classification token is passed through a fully connected layer to recognize actions or events. The advantage of using a Transformer encoder on top of spatial features is twofold: (a) it allows processing a complete video in a single pass, and (b) it considerably improves training and inference efficiency by avoiding the expensive 3D convolutions. This makes VTN particularly suitable for modeling long videos where interactions between entities are spread throughout the video length. Their experiments on the Kinetics-400 dataset [71] with various backbones (ResNet [67], ViT [11] and DeiT [12]) show competitive performance.

Girdhar et al.[18] use a variant of the Transformer architecture to aggregate person-specific contextual cues in a video for action classification and localization. Initially, the model uses Faster-RCNN [125] style processing, where a backbone model generates features that are forwarded to a Region Proposal Network to obtain object proposals. Then RoI pooling is applied to generate object-specific features. Multi-head self-attention [1] is then applied on top of the object features as a cascade of self-attention layers. In each Transformer unit, a particular person feature is treated as the ‘query’ (Q), while the features from the neighboring video clip are used as ‘key’ (K) and ‘value’ (V). The location information is explicitly encoded in the input feature map from which K, V and Q are derived, thus incorporating positional information in the self-attention. For a given $400\times 400\times 64$ video clip, the key and value tensors are of size $16\times 25\times 25\times 128$, while the query is a $128$-dimensional vector. Although [18] uses only the RGB stream, additional modalities like optical flow and audio signal (as in competing works) would further increase the compute complexity.
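The person-specific attention described above is a straightforward cross-attention: a single RoI-pooled person feature forms the query, while the flattened spatio-temporal clip features supply the keys and values. A minimal single-head numpy sketch with the tensor sizes quoted above; the learned projection matrices of the real model are omitted, and the random features are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 128                                    # feature width, matching the 128-d query above

def person_attention(query, keys, values):
    """Single-head attention: one person query against all clip positions."""
    scores = keys @ query / np.sqrt(d)     # (n,) similarity per spatio-temporal position
    w = np.exp(scores - scores.max())      # softmax over the clip positions
    w = w / w.sum()
    return w @ values                      # (d,) context aggregated for this person

person_q = rng.normal(size=d)              # RoI-pooled person feature -> Q
clip = rng.normal(size=(16 * 25 * 25, d))  # flattened 16x25x25 clip features -> K, V
context = person_attention(person_q, clip, clip)
```

Stacking several such layers, each re-using the updated person representation as the query, yields the cascade of self-attention units described above.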
Further, the Transformer model was found to be sub-optimal for action localization, perhaps due to its tendency to incorporate global information. Therefore, it is important to achieve the right trade-off between global and local context for problems that demand precise delineation (_e.g._ , action localization and segmentation).

Human action recognition based on skeleton representations requires understanding relationships between different joints of a body in a given frame as well as between different frames of a video. Plizzari et al.[216] proposed a two-stream Transformer network to model such relationships. They introduced spatial self-attention (SSA) to model relations between different body joints (Fig. 14(a)) and temporal self-attention (TSA) to capture long-range inter-frame dependencies (Fig. 14(b)). They first used a small residual network to extract features from the skeleton data and then used the SSA and TSA modules to process those feature maps. SSA finds the correlation between each pair of joints independently, while TSA focuses on how the features of a certain joint change between frames along the temporal dimension. The purpose of SSA is to discover relationships among the surrounding joints in the same way that the Transformer relates different words in a phrase. On the other hand, TSA finds long-range relations between frames, similar to how relations among phrases are built in NLP. The two-stream model achieves state-of-the-art results on the NTU-RGB+D 60 [217] and NTU-RGB+D 120 [218] datasets.

Multiscale Vision Transformers (MViT) [219] build a feature hierarchy by progressively expanding the channel capacity and reducing the spatio-temporal resolution in videos. They introduce multi-head pooling attention to gradually change the visual resolution in their pyramid structure. TimeSFormer [213] extends ViTs [132] to videos by considering the video as a sequence of patches extracted from individual frames.
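The split between SSA and TSA is essentially a choice of which axis of the (frames, joints, channels) feature tensor self-attention runs over: joints within a frame for SSA, a single joint's trajectory across frames for TSA. A small numpy sketch of this factorization; the sizes are illustrative, and the learned projections and multiple heads of the actual model are omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def self_attention(x):
    """Plain scaled dot-product self-attention over the rows of x: (n, c) -> (n, c)."""
    scores = x @ x.T / np.sqrt(x.shape[-1])
    w = np.exp(scores - scores.max(-1, keepdims=True))
    w = w / w.sum(-1, keepdims=True)
    return w @ x

T, J, C = 30, 25, 64                       # frames, joints, channels (illustrative sizes)
feats = rng.normal(size=(T, J, C))         # per-joint features from a small CNN

# SSA: joints attend to each other within the same frame.
ssa = np.stack([self_attention(feats[t]) for t in range(T)])             # (T, J, C)

# TSA: each joint attends to its own trajectory across frames.
tsa = np.stack([self_attention(feats[:, j]) for j in range(J)], axis=1)  # (T, J, C)
```

The same axis-factorization underlies TimeSFormer's divided space-time attention, which applies the spatial and temporal attentions in sequence within each block rather than in two parallel streams.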
To capture spatio-temporal relationships, they propose divided attention, i.e., spatial and temporal attentions are separately applied within each block. TimeSFormer demonstrates SoTA performance on action recognition and can be applied to clips longer than one minute. Another notable pure-Transformer based model is the Video Vision Transformer (ViViT) [212]. First, the spatio-temporal tokens are extracted, and then efficient factorised versions of self-attention are applied to encode relationships between the tokens. However, they require initialization with image-pretrained models to effectively learn the ViT models. There has also been concurrent work on learning sound pretrained models using self-supervised learning with ViTs. An important recent effort is the long-short contrastive learning (LSTCL) framework [220], which reconstructs representations from different time-scales (narrow and broad) as auxiliary learning tasks and demonstrates good down-stream performance.

#### 3.8.3 Video Instance Segmentation

The Video Instance Segmentation Transformer (VisTR) [209] model extends DETR [13] to the video object instance segmentation (VIS) task. Local features are obtained using a backbone CNN on a collection of video frames. An encoder and a decoder Transformer are used, similar to DETR, to frame the instance segmentation problem as a sequence-to-sequence prediction task. The input frame-level features are concatenated to form clip representations, and the Transformer outputs instance predictions in an order that is consistent across frames. This integrates object detection and tracking within a single unified architecture. The predicted outputs are matched with the ground truth using bipartite matching. Similar to Mask R-CNN [127], a separate head is used to predict the instance mask based on self-attention and 3D convolutions.
The overall results are competitive among single-model approaches on the YouTube-VIS dataset [221], but fall somewhat below more complex CNN-based models such as MaskProp [222].

Figure 14: Spatial/Temporal Attention for Skeleton Data Representations: (a) spatial self-attention, (b) temporal self-attention. Relationships between body joints and inter-frame dependencies are modeled using two dedicated self-attention modules. Figure is from [216].

### 3.9 Transformers in Low-shot Learning

In the few-shot learning setting, a support set is provided at inference time to adapt to a novel set of categories. Transformer models have been used to learn set-to-set mappings on this support set [26] or to learn the spatial relationships between a given input query and the support set samples [25]. In terms of absolute performance, the patch-wise spatial self-attention between query and support set images excels compared to the image-level association learned in [26]. However, the patch-wise attention computation is expensive. We elaborate on these approaches below.

Doersch et al.[25] explore the utility of self-supervision and Transformer models for few-shot fine-grained classification, where a distribution mismatch exists between the training and evaluation phases. They develop the Cross-Transformer model to relate a given query image with the few examples available in the support set. To this end, the Transformer finds spatially similar regions in the query and support set images, and the corresponding features are then used to obtain class decisions for the query. The queries in the Transformer architecture are derived from the grid features obtained using the query image. Similarly, grid features from the support images are used to construct keys and values, which are in turn used to derive attended outputs. This approach, together with a contrastive self-supervision based training mechanism, leads to the best performance on the challenging Meta-Dataset [223].
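The patch-wise correspondence at the heart of the Cross-Transformer can be sketched as cross-attention from query-image grid features to support-set grid features, followed by a distance-based class score. This simplified sketch omits the learned key/query/value projections and uses random features purely to show the data flow; all names and sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
p, d = 9, 16                               # grid patches per image, feature width

def query_aligned_prototype(q_feats, s_feats):
    """Each query patch attends over all support patches of one class."""
    scores = q_feats @ s_feats.T / np.sqrt(d)            # (p, n_support_patches)
    w = np.exp(scores - scores.max(-1, keepdims=True))
    w = w / w.sum(-1, keepdims=True)
    return w @ s_feats                                   # prototype aligned to the query grid

query = rng.normal(size=(p, d))                          # grid features of the query image
support = {c: rng.normal(size=(2 * p, d)) for c in range(5)}   # 5-way, 2-shot

# Class score: negative squared distance to the query-aligned prototype.
scores = {c: -np.sum((query - query_aligned_prototype(query, s)) ** 2)
          for c, s in support.items()}
pred = max(scores, key=scores.get)
```

Because the prototype is rebuilt per query patch, spatially corresponding regions are compared, which is what makes this approach stronger, but also more expensive, than image-level matching.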
Ye et al.[26] propose to adapt the few-shot embeddings learned on the base classes to the few-shot target classes during inference using a Transformer module. This leads to task-specific embeddings that perform better on discriminative tasks such as few-shot classification. While many other set-to-set functions are also evaluated, such as graph convolutional networks [224], bidirectional LSTMs [32] and DeepSets [225], the best performance is achieved with the Transformer-based mapping. This is attributed to the better contextualization, task interpolation and extrapolation capability of Transformers, and their permutation invariance while maintaining a relatively low parameter complexity. The Transformer architecture in [26] follows the standard model [1]. The embeddings are adapted using a contrastive loss function to preserve discriminative properties (Fig. 15). The resulting model achieves strong performance on inductive, transductive, and generalized FSL tasks.

Liu et al.[226] learn a multi-head self-attention based module to integrate the visual representations learned by models trained on the different domains present in Meta-Dataset [223]. The Universal Representation Transformer (URT) layer dynamically re-weights the representations from different domain-specific backbones, and proves very effective in handling few-shot tasks across a variety of data distributions.

Figure 15: An overview of FEAT [26]. Compared to the conventional instance embedding methods in FSL that keep the embedding function the same for all tasks (a), FEAT uses a set-to-set function to adapt the embedding function to each FSL task (b). It evaluates several set-to-set functions and finds the Transformer module to be the most suitable choice for FSL. Figure from [26].

### 3.10 Transformers for Clustering

Clustering aims to discover structure in the data by grouping similar data points together.
It has numerous applications such as data visualization and interpretation, anomaly detection, and open-set categorization. Neural networks have been developed for set prediction problems [225, 227]; however, the set points are processed individually, which can lose information about inter-point relationships. Recent works employ Transformers that operate on set inputs, called Set Transformers (ST) [228], for _amortized_ clustering. Amortized clustering is a challenging problem that seeks to learn a parametric function that can map an input set of points to their corresponding cluster centers. Lee et al.[228] propose to learn such a mapping function using a Transformer architecture comprising multi-head self-attention blocks [1]. The Transformer model is permutation invariant by design and allows encoding both pair-wise and higher-order relationships between the input points. However, a full Transformer would lead to a high computational cost of $\mathcal{O}(n^{2})$ in each self-attention layer, where $n$ is the number of points in the set. ST reduces this cost to $\mathcal{O}(mn)$ by using an Induced Self-Attention Block that uses a low-rank projection ($H\in\mathbb{R}^{m}$) to allow operating on large sets. The model was trained to learn optimal parameters that maximize the likelihood of a mixture of Gaussians (MoGs). Thus, the MoG parameters are estimated by the ST given a set of data points. Beyond amortized clustering, ST is a generic framework which can handle other set-input problems such as counting unique elements in an input set, multi-instance learning, set anomaly detection, and 3D point-cloud classification. More recently, [229] improves on [228] by taking a sequential approach to cluster generation, thereby allowing assignment to a variable number of clusters.
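The Induced Self-Attention Block replaces one $\mathcal{O}(n^{2})$ self-attention with two attentions routed through $m \ll n$ inducing points, which is where the $\mathcal{O}(mn)$ cost above comes from. A minimal numpy sketch, assuming a single head and omitting the feed-forward and normalization sublayers of the actual Set Transformer block:

```python
import numpy as np

rng = np.random.default_rng(0)

def attention(q, k, v):
    """Scaled dot-product attention."""
    scores = q @ k.T / np.sqrt(k.shape[-1])
    w = np.exp(scores - scores.max(-1, keepdims=True))
    w = w / w.sum(-1, keepdims=True)
    return w @ v

def isab(x, inducing):
    """Induced self-attention: two attentions of cost O(m*n) instead of one O(n^2)."""
    h = attention(inducing, x, x)     # (m, d): the inducing points summarize the set
    return attention(x, h, h)         # (n, d): the set attends back to the summary

n, m, d = 1000, 16, 32
x = rng.normal(size=(n, d))
inducing = rng.normal(size=(m, d))    # learnable parameters in the real model
out = isab(x, inducing)
```

Note that the block remains permutation-equivariant: permuting the input points only permutes the output rows, since the inducing-point summary is itself permutation-invariant.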
### 3.11 Transformers for 3D Analysis

Given the irregular (variable number of points) and permutation-invariant nature of 3D point cloud representations, Transformers provide a promising mechanism to encode rich relationships between 3D data points. To this end, recent works [230, 231] are motivated by the capability of Transformers to learn set functions. Specifically, [230] introduced a Point Transformer which uses vector attention to learn weights for each channel, while [231] suggests an alternative design where local 3D structure is explicitly encoded. The non-local nature of Transformers is exploited in [45] for an accurate human pose and mesh reconstruction algorithm. We discuss these approaches below.

Self-attention, being a set operator, is ideally suited for processing point clouds, a 3D data representation that demands invariance to the number of points and their permutation. Zhao et al. [230] propose a point Transformer layer that applies self-attention in the local neighborhood of 3D points. The proposed layer builds on the vectorized self-attention network (SAN) [82], where attention weights are represented with vectors. Furthermore, a positional encoding is added both to the attention vector and the transformed features (value vectors) to represent location information. The point Transformer layer is sandwiched between two linear layers to create a point Transformer block that is stacked multiple times in the developed network architecture. Their design also includes transition down/up blocks to reduce/increase the number of points in the input (in a typical encoding-decoding pipeline style). The resulting architecture shows promising results on 3D classification and segmentation tasks.

Figure 16: Mesh Transformer architecture. The joint and vertex queries are appended with positional embeddings and passed through multiple self-attention layers to jointly regress 3D coordinates of joints and mesh vertices. Figure is from [45].
The Point Cloud Transformer (PCT) [231] is a parallel work to [230] and is also motivated by the permutation-invariance property of Transformers. However, compared to [230], it is more directly based on the conventional Transformer architecture [1] and does not involve vector attention. The key modifications include a 3D coordinate-based position encoding, an offset attention module, and a neighbor embedding that encodes local 3D structure in point clouds. Specifically, the offset attention layer calculates the difference between the self-attended features and the input features using element-wise subtraction. The local neighbor embedding simply finds self-attention relationships among a group of points instead of individual 3D points. Explicitly incorporating local neighborhood information makes this a more efficient architecture compared to [230]. The method shows promising performance on 3D shape classification, normal estimation and segmentation tasks on the ModelNet40 [232] and ShapeNet [233] datasets.

The Mesh Transformer (METRO) [45] model targets 3D human pose and mesh reconstruction from a single 2D image. A key challenge here is to faithfully learn the non-local interactions between body joints and mesh vertices (_e.g._ , hand and foot). The expressivity of the Transformer network is used to jointly model _vertex to vertex_ relationships in a mesh as well as the _vertex to body-joint_ relationships. The self-attention mechanism can attend to any combination of vertices in the mesh, thereby encoding non-local relationships. The multi-layer Transformer architecture sequentially performs dimensionality reduction to map the 2D image to the 3D mesh. Position encoding is performed using the 3D coordinates ($x$, $y$, $z$) of each vertex and each body joint. Similar to masked language modeling in NLP, METRO uses masked vertex modeling (MVM), which randomly masks some percentage of the input queries (see Fig. 16).
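The offset attention idea in PCT can be sketched in a few lines: compute plain self-attention, subtract it element-wise from the input, transform the offset, and add it back residually. In the sketch below, a linear-plus-ReLU stand-in replaces the paper's Linear-BatchNorm-ReLU block, and all sizes and weights are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def self_attention(x):
    """Plain scaled dot-product self-attention over the rows of x."""
    scores = x @ x.T / np.sqrt(x.shape[-1])
    w = np.exp(scores - scores.max(-1, keepdims=True))
    w = w / w.sum(-1, keepdims=True)
    return w @ x

def offset_attention(x, W):
    """Simplified PCT-style offset attention: transform the offset, add the input back."""
    offset = x - self_attention(x)        # the element-wise difference described above
    return x + np.maximum(offset @ W, 0)  # linear + ReLU stand-in for the paper's LBR block

n, d = 128, 64                            # points and channels (illustrative)
points = rng.normal(size=(n, d))
W = rng.normal(size=(d, d)) * 0.1         # learned in the real network
out = offset_attention(points, W)
```

Working on the offset rather than the raw attended features emphasizes how each point deviates from its attended context, which is loosely analogous to a discrete Laplacian on the point set.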
The Transformer is tasked with regressing all the joints and vertices, which helps encode inter-dependencies between them. METRO obtains state-of-the-art results on human mesh reconstruction on the Human3.6M [234] and 3DPW [235] datasets. Since the approach does not depend on a parametric mesh model, it generalizes well to other reconstruction tasks such as 3D hand reconstruction [236]. Overall, this is the first effort to employ Transformers for 3D human reconstruction tasks and it leads to fairly good results.

## 4 Open Challenges & Future Directions

Despite excellent performance from Transformer models and their interesting salient features (Table I), there exist several challenges associated with their applicability to practical settings (Table II). The most important bottlenecks include the requirement for large amounts of training data and the associated high computational costs. Visualizing and interpreting Transformer models has also proved challenging. In this section, we provide an overview of these challenges, mention some of the recent efforts to address those limitations, and highlight the open research questions.

### 4.1 High Computational Cost

As discussed in Sec. 1, a strength of Transformer models is their flexibility to scale to high parametric complexity. While this is a remarkable property that allows training enormous models, it results in high training and inference costs (a detailed comparison between CNNs and ViTs is shown in Table III). As an example, the BERT [3] basic model (with 109 million parameters) took around 1.89 peta-flop days (a peta-flop day is a measure of computation equal to performing $10^{15}$ neural-net operations per second for one complete day) for training, while the latest GPT3 [6] model (175 billion parameters) took around 3640 peta-flop days for training (a staggering $\sim$1925$\times$ increase).
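The quoted $\sim$1925$\times$ factor follows directly from the two compute figures:

```python
# Peta-flop days quoted above for BERT-base and GPT-3 training compute.
bert_pfd = 1.89
gpt3_pfd = 3640
ratio = gpt3_pfd / bert_pfd
print(round(ratio))                # 1926, i.e. the ~1925x increase cited above

# One peta-flop day, per the definition above: 10^15 ops/s sustained for a day.
ops_per_pfd = 1e15 * 86400         # = 8.64e19 operations
```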
This comes with a huge price tag; _e.g._ , according to one estimate [237], GPT3 training might have cost OpenAI 4.6 million USD. Additionally, these large-scale models require aggressive compression (_e.g._ , distillation) to make them feasible for real-world settings. An empirical study on the scalability of Vision Transformers with respect to the number of parameters (ranging from five million to two billion), the size of the training dataset (ranging from 30 million to three billion training images), and the compute budget (1-10000 TPU core-days) is presented in [238]. From this study, we can draw the following conclusions: (a) scaling up compute, model size and the number of training samples improves performance; (b) only large models (with more parameters) can benefit from more training data, while the performance of smaller models plateaus quickly and cannot leverage additional data. This indicates that large-scale models have the capacity to further enhance their representation learning capabilities. However, with current designs, scaling up Transformer models is expensive and compute-prohibitive, thus necessitating efficient designs.

Task | Method | Design Highlights (focus on differences with the standard form) | Input Data Type | Label Type | Loss
---|---|---|---|---|---
Image Classification | ViT [11] | Directly adopts the NLP Transformer encoder for images, Mechanism to linearly embed image patches with positional embedding suitable for the encoder. | 2D Image | Class labels | Cross-entropy
Image Classification | DeiT [12] | Transformer as a student while CNN as a teacher, Distillation tokens to produce estimated labels from teacher, Attention between class and distillation tokens. | 2D Image | Class labels | Cross-entropy, Distillation loss based on KL-divergence
Image Classification | CLIP [195] | Jointly trains image and text encoders on image-text pairs, to maximize similarity of valid pairs and minimize it otherwise. | 2D Images & text | Image-text pairs | Symmetric cross-entropy
Object Detection | DETR [13] | Linear projection layer to reduce CNN feature dimension, Spatial positional embedding added to each multi-head self-attention layer of both encoder and decoder, Object queries (output positional encoding) added to each multi-head self-attention layer of the decoder. | 2D Image | Class labels | Hungarian loss based on bipartite matching between predictions and ground truths
Object Detection | D-DETR [14] | Deformable Transformer consisting of deformable attention layers to introduce sparse priors in Transformers, Multi-scale attention module. | 2D Image | Class labels | Hungarian loss
Low-Shot Learning | CT [25] | Self-supervised pretraining, Query-aligned class prototypes that provide spatial correspondence between the support-set images and query image. | 2D Image | Pretraining without labels, few-shot learning with class labels | Normalized cross-entropy
Image Colorization | ColTran [24] | Conditional row/column multi-head attention layers, Progressive multi-scale colorization scheme. | 2D Image | 2D Image | Negative log-likelihood of the images
Action Recognition | ST-TR [216] | Spatial and temporal self-attention to operate on graph data such as joints in skeletons. | Skeleton | Action classes | Cross-entropy
Super-Resolution | TTSR [16] | Texture-enhancing Transformer module, Relevance embeddings to compute the relevance between the low-resolution and reference image. | 2D Image | 2D Image | Reconstruction loss, Perceptual loss defined on pretrained VGG19 features
Multi-Modal Learning | Oscar [44] | Transformer layer to jointly process the triplet representation of image-text [words, tags, features], Masked tokens to represent text data. | 2D Image | Captions, Class labels, Object tags | Negative log-likelihood of masked tokens, Contrastive binary cross-entropy
3D Classification/Segmentation | PT [230] | Point Transformer block, Transition-down block to reduce cardinality of the point set, Transition-up block for dense prediction tasks. | CAD models, 3D object part segmentation | Object and shape categories | Cross-entropy
3D Mesh Reconstruction | METRO [45] | Progressive dimensionality reduction across Transformer layers, Positional encoding with 3D joint and 3D vertex coordinates, Masked vertex/joint modeling. | 2D Image | 3D Mesh + Human Pose | $L_{1}$ loss on mesh vertices and joints in 3D and 2D projection
Vision and Language Navigation | Chen et al.[194] | Uni-modal encoders on language and map inputs followed by a cross-modal transformer, Trajectory position encodings in the map encoder. | Instruction text + RGBD panorama + Topological Environment Map | Navigation Plan | Cross-entropy over nodes and [stop] action
Referring Image Segmentation | CMSA [15] | Multimodal features, Cross-modal self-attention on multiple levels and their fusion using learned gates. | 2D Image + Language expression | Segmentation mask | Binary cross-entropy loss
Video Classification | Lee et al.[182] | Operates on real-valued audio-visual signals instead of tokens, Contrastive learning for pre-training, End-to-end multimodal transformer learning. | Audio-Visual | Activity labels | Contrastive InfoNCE loss and Binary cross-entropy

TABLE I: A summary of key design choices adopted in different variants of transformers for a representative set of computer vision applications. The main changes relate to specific loss function choices, architectural modifications, different position embeddings and variations in input data modalities.

Task | Method | Metric | Dataset | Performance | Highlights | Limitations
---|---|---|---|---|---|---
Image Classification | ViT [11] ICLR’21 | Top-1 Acc. | ImageNet | 88.55 | a) First application of a Transformer (global self-attention) directly on image patches, b) Convolution-free network architecture, c) Outperforms CNN models such as ResNet. | a) Requires training on large-scale data, _e.g._ , 300 million images, b) Requires careful transfer learning to the new task, c) Requires a large model with 632 million parameters to achieve SOTA results.
Image Classification | DeiT [12] arXiv’20 | Top-1 Acc. | ImageNet | 83.10 | a) Successfully trains a Transformer on ImageNet only, b) Introduces an attention-based distillation method, c) Produces competitive performance with small (86 million parameters) Transformers. | a) Requires access to a pretrained CNN-based teacher model, thus performance depends on the quality of the teacher model.
Image Classification | Swin-T [36] arXiv’21 | Top-1 Acc. | ImageNet | 84.5 | a) Provides a general-purpose backbone for different vision tasks, e.g., classification, detection and segmentation, b) A hierarchical design using the shifted-windows operation. | a) Hard to train from scratch on smaller datasets, b) Quadratic compute complexity inherent to the self-attention operation.
Low-Shot Learning | CT [25] NeurIPS’20 | Top-1 Acc. | ImageNet, COCO | 62.25, 60.35 | a) Self-supervised pre-training mechanism that does not need manual labels, b) Dynamic inference using a Transformer achieving state-of-the-art results. | The proposed algorithm is limited in its capacity to perform on datasets that lack spatial details such as texture.
Object Detection | DETR [13] ECCV’20 | AP | COCO | 44.9 | a) Use of a Transformer allows an end-to-end training pipeline for object detection, b) Removes the need for hand-crafted post-processing steps. | a) Performs poorly on small objects, b) Requires a long training time to converge.
Object Detection | D-DETR [14] ICLR’21 | AP | COCO | 43.8 | a) Achieves better performance on small objects than DETR [13], b) Faster convergence than DETR [13]. | Obtains SOTA results (52.3 AP) but with a two-stage detector design and test-time augmentations.
Image Colorization | ColTran [24] ICLR’21 | FID | ImageNet | 19.71 | a) First successful application of Transformer to image colorization, b) Achieves SOTA FID score. | a) Lacks end-to-end training, b) limited to images of size 256$\times$256. Action Recognition | ST-TR [216] arXiv’20 | Top-1 Acc. | NTU 60/120 | 94.0/84.7 | a) Successfully applies Transformer to model relations between body joints both in spatial and temporal domain, b) Achieves SOTA results. | Proposed Transformers do not process joints directly rather operate on features extracted by a CNN, thus the overall model is based on hand-crafted design. Super-Resolution | TTSR [16] CVPR’20 | PSNR/ SSIM | CUFED5 Sun80 Urban100 Manga109 | 27.1 / 0.8 30.0 / 0.81 25.9 / 0.78 30.1 / 0.91 | a) Achieves state-of-the-art super-resolution by using attention, b) Novel Transformer inspired architectures that can process multi-scale features. | a) Proposed Transformer does not process images directly but features extracted by a convolution based network, b) Model with large number of trainable parameters, and c) Compute intensive. Multi-Model Learning | ViLBERT [181] NeurIPS’19 | Acc./ mAP ($R@1$) | VQA [183]/ Retrieval [239] | 70.6/ 58.2 | a) Proposed Transformer architecture can combine text and visual information to understand inter-task dependencies, b) Achieves pre-training on unlabelled dataset. | a) Requires large amount of data for pre-training, b) Requires fine tuning to the new task. | Oscar [44] ECCV’20 | Acc./ mAP ($R@1$) | VQA [240]/ COCO | 80.37/57.5 | a) Exploit novel supervisory signal via object tags to achieve text and image alignment, b) Achieves state-of-the-art results. | Requires extra supervision through pre-trained object detectors thus performance is dependent on the quality of object detectors. | UNITER [43] ECCV’20 | Acc./ Avg. 
($R@1/5/10$) | VQA [183]/ Flickr30K [241] | 72.47/83.72 | Learns fine-grained relation alignment between text and images | Requires large multi-task datasets for Transformer training which lead to high computational cost. 3D Analysis | Point Transformer [230] arXiv’20 | Top-1 Acc. IoU | ModelNet40 [232] | 92.8 85.9 | a) Transformer based attention capable to process unordered and unstructured point sets, b) Permutation invariant architecture. | a) Only moderate improvements over previous SOTA, b) Large number of trainable parameters around 6$\times$ higher than PointNet++ [242]. | METRO [45] arXiv’20 | MPJPE PA-MPJPE MPVE | 3DPW [235] | 77.1 47.9 88.2 | a) Does not depend on parametric mesh models so easily extendable to different objects, b) Achieves SOTA results using Transformers. | Dependent on hand-crafted network design. TABLE II: A summary of advantages and limitations of different Transformers based methods in different Tasks. (CT: Cross Transformers, AP: Average Precision, mAP: mean AP, IoU: Intersection over Union, FID: Fréchet inception distance, MPJPE: Mean Per Joint Position Error, MPVE: Mean Per Vertex Error). 
Method | #Param (M) | GFLOPs | Top-1 Acc (%)
---|---|---|---
ResNet18 [67]$\star$ | 11.7 | 1.8 | 69.8
EfficientNet-B3 [87]$\star$ | 12.0 | 1.8 | 81.6
DeiT-T [12] | 5.7 | 1.3 | 72.2
T2T-ViTt-7 [35] | 5.0 | 1.3 | 71.7
LocalViT-T [107] | 5.9 | 1.3 | 74.8
CrossViT-T [104] | 6.9 | 1.6 | 73.4
PVTv1-T [93] | 13.2 | 1.9 | 75.1
ResT-Lite [110] | 10.5 | 1.4 | 77.2
CaiT-XXS-24 [243] | 12.0 | 2.5 | 77.6
PVTv2-B1 [97] | 13.1 | 2.1 | 78.7
Lv-ViT-T [89] | 8.5 | – | 79.1
RegionViT-T [100] | 13.8 | 2.4 | 80.4
ResNet50 [67]$\star$ | 25.6 | 4.1 | 76.1
ResNeXt50-32x4d [244]$\star$ | 25.0 | 4.3 | 77.6
RegNetY-4G [86]$\star$ | 21.0 | 4.0 | 80.0
EfficientNet-B4 [87]$\star$ | 19.0 | 4.2 | 82.9
DeiT-S [12] | 22.1 | 4.6 | 79.9
PVTv1-S [93] | 24.5 | 3.8 | 79.8
LocalViT-S [107] | 22.4 | 4.6 | 80.8
CrossViT-S [104] | 26.7 | 5.6 | 81.0
TNT-S [88] | 23.8 | 5.2 | 81.3
Swin-T [36] | 29.0 | 4.5 | 81.3
NesT-T [111] | 17.0 | 5.8 | 81.5
T2T-ViTt-14 [35] | 21.5 | 5.2 | 81.5
CvT-13 [96] | 20.0 | 4.5 | 81.6
ResT-B [110] | 30.3 | 4.3 | 81.6
Twins-SVT-S [37] | 24.0 | 2.8 | 81.7
PVTv2-B2-Li [97] | 22.6 | 3.9 | 82.1
RegionViT-S [100] | 30.6 | 5.6 | 82.5
Lv-ViT-S [89] | 26.0 | 6.6 | 83.3
ResNet101 [67]$\star$ | 44.7 | 7.9 | 77.4
ResNeXt101-32x4d [244]$\star$ | 44.2 | 8.0 | 78.8
RegNetY-8G [86]$\star$ | 39.0 | 8.0 | 81.7
EfficientNet-B5 [87]$\star$ | 30.0 | 9.9 | 83.6
CvT-21 [96] | 32.0 | 7.1 | 82.5
CaiT-S-24 [243] | 32.2 | 9.4 | 82.7
T2T-ViTt-19 [35] | 39.0 | 9.8 | 81.4
PVTv1-M [93] | 44.2 | 6.7 | 81.2
PVTv2-B3 [97] | 45.2 | 6.9 | 83.2
NesT-S [111] | 38.0 | 10.4 | 83.3
ResNet152 [67]$\star$ | 60.2 | 11.6 | 78.3
CaiT-S-36 [243] | 48.0 | 13.9 | 83.3
T2T-ViTt-24 [35] | 64.0 | 15.0 | 82.2
PVTv1-L [93] | 61.4 | 9.8 | 81.7
TNT-B [88] | 66.0 | 14.1 | 82.8
Swin-S [36] | 50.0 | 8.7 | 83.0
Twins-SVT-B [37] | 56.0 | 8.3 | 83.2
RegionViT-B [100] | 72.7 | 13.0 | 83.3
PVTv2-B4 [97] | 62.6 | 10.1 | 83.6
ResNeXt101-64x4d [244]$\star$ | 83.5 | 15.6 | 79.6
RegNetY-16G [86]$\star$ | 84.0 | 16.0 | 82.9
EfficientNet-B6 [87]$\star$ | 43.0 | 19.0 | 84.0
NesT-B [111] | 68.0 | 17.9 | 83.8
ViT-B/16 [11] | 86.6 | 17.6 | 79.8
DeiT-B/16 [12] | 86.6 | 17.6 | 81.8
Swin-B [36] | 88.0 | 15.4 | 83.3
Twins-SVT-L [37] | 99.2 | 14.8 | 83.7
PVTv2-B5 [97] | 82.0 | 11.8 | 83.8
Lv-ViT-M [89] | 56.0 | 16.0 | 84.1

TABLE III: A comparative analysis between different vision transformer and CNN models in terms of their parameter complexity and top-1 (%) accuracy on the ImageNet validation set. For a direct comparison, we consider models that are trained on ImageNet from scratch on input of size 224$\times$224. $\star$ denotes pure CNN-based methods.

In the language domain, recent works focus on reducing the high complexity of Transformer models, which arises mainly from the self-attention mechanism [1], where a token’s representation is updated by considering all tokens from the previous layer. For example, [245, 103] explore selective or sparse attention to previous-layer tokens while updating each next-layer token. Linformer [38] reduces the complexity of the standard self-attention operation from $\mathcal{O}(n^{2})$ to $\mathcal{O}(n)$ in both time and memory. The main idea is to show that a low-rank matrix is sufficient to model the self-attention mechanism. The Reformer model [246] employed locality-sensitive hashing (LSH) to reduce the complexity of self-attention from $\mathcal{O}(n^{2})$ to $\mathcal{O}(n\log n)$. In a similar pursuit, the recent Lambda Networks model local context as a linear function, which helps reduce the complexity of self-attention [247]. These linear functions (lambdas) are applied to the input query to model contextual relationships between pixels. Vyas et al. [248] developed an efficient _cluster attention_ that approximates the original self-attention in order to deal with large input sequences.
The cluster attention groups queries into clusters and then computes attention between cluster centers, instead of attention between all pairs of queries, which leads to quadratic complexity. The main idea is that queries close in Euclidean space should have similar attention distributions. With a fixed number of clusters, this intuition reduces the quadratic complexity to a linear complexity of $\mathcal{O}(nc)$ with respect to the input sequence length $n$ (where $c$ is the number of clusters). We refer interested readers to a survey on efficient Transformers in NLP [34].

Similar to the NLP domain, computer vision models also suffer from the high computational cost of Transformer models. For example, image generators based on sequence-based Transformers (_e.g.,_ iGPT) have a high compute cost, limiting their applicability to high-resolution inputs. The time and memory cost of the core self-attention operation in Transformers increases quadratically with the number of patches, i.e., $\mathcal{O}(n^{2})$ for $n$ image patches (in some applications, e.g., low-level vision, $n=H\times W$, where $H,W$ denote the height and width of the image). This is a major drawback of existing Transformers that hinders their application to most tasks involving high-resolution (HR) images, such as object detection and segmentation (in high-level vision), and super-resolution, deblurring, denoising, etc. (in low-level vision). Numerous methods have been proposed that make special design choices to perform self-attention more ‘efficiently’, for instance employing pooling/downsampling in self-attention [97, 219, 249], local window-based attention [36, 250], axial attention [179, 251], low-rank projection attention [38, 252, 253], kernelizable attention [254, 255], and similarity-clustering based methods [246, 256].
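The clustering idea described above can be sketched in a few lines of NumPy. This is a toy single-head sketch under our own simplifications (plain k-means on the queries, one attention row per centroid broadcast to its cluster members), not the implementation of [248]:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def full_attention(Q, K, V):
    # Standard softmax attention: the (n, n) score matrix is the quadratic cost.
    return softmax(Q @ K.T / np.sqrt(Q.shape[1])) @ V

def clustered_attention(Q, K, V, c=8, iters=5, seed=0):
    """Approximate attention with c << n clusters: O(n*c) scores instead of O(n^2)."""
    n, d = Q.shape
    rng = np.random.default_rng(seed)
    centroids = Q[rng.choice(n, size=c, replace=False)]
    for _ in range(iters):  # plain k-means on the queries
        labels = np.argmin(((Q[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
        for j in range(c):
            if np.any(labels == j):
                centroids[j] = Q[labels == j].mean(axis=0)
    A = softmax(centroids @ K.T / np.sqrt(d))  # (c, n): one attention row per centroid
    out_c = A @ V                              # (c, d)
    return out_c[labels]                       # broadcast each centroid's output to its members

rng = np.random.default_rng(1)
Q, K, V = (rng.standard_normal((64, 16)) for _ in range(3))
exact = full_attention(Q, K, V)
approx = clustered_attention(Q, K, V, c=16)
print(approx.shape)  # (64, 16)
```

With more clusters the approximation approaches exact attention, recovering it in the limit of one query per cluster.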
However, almost all of these approaches either come with a trade-off between complexity and accuracy, require special hardware support, or are still not applicable to very large images. Therefore, there is a pressing need to develop an efficient self-attention mechanism that can be applied to HR images on resource-limited systems without compromising accuracy. It will be interesting to explore how existing models can be extended to high-dimensional cases, _e.g.,_ using a _multi-scale transformer_ design with localized context modeling. By introducing inductive biases based on our understanding of visual learning tasks (e.g., spatial relationships in the local neighbourhood), the high computational cost can be reduced. Similarly, sparse attention maps modeled with low-rank matrix factorizations can also help reduce the computational cost [211].

### 4.2 Large Data Requirements

Since Transformer architectures do not inherently encode inductive biases (prior knowledge) for dealing with visual data, they typically require large amounts of training data to figure out the underlying modality-specific rules. For example, a CNN has inbuilt translation invariance, weight sharing, and partial scale invariance due to pooling operations or multi-scale processing blocks. However, a Transformer network needs to figure out these image-specific concepts on its own from the training examples. Similarly, relationships between video frames need to be discovered automatically by the self-attention mechanism by looking at a large database of video sequences. This results in longer training times, a significant increase in computational requirements, and the need for large training datasets. For example, the ViT [11] model requires hundreds of millions of image examples to obtain reasonable performance on the ImageNet benchmark dataset.
The question of learning a Transformer in a data-efficient manner is an open research problem, and recent works report encouraging steps towards its resolution. For example, DeiT [12] uses a distillation approach to achieve data efficiency, while T2T (Tokens-to-Token) ViT [35] models local structure by combining spatially close tokens, leading to competitive performance when trained only on ImageNet from scratch (without pre-training). By incorporating CNN-like feature hierarchies in ViTs to effectively capture local image cues, ViTs (e.g., CCT [106], NesT [111]) can be trained from scratch even on small-scale datasets (e.g., CIFAR-10). Another approach to data-efficient training of ViTs is proposed in [257]. The authors show that by smoothing the local loss surface using the sharpness-aware minimizer (SAM) [258], ViTs can be trained with a simple data augmentation scheme (random crop and horizontal flip) [259] instead of compute-intensive strong data augmentation strategies, and can outperform their counterpart ResNet models.

### 4.3 Vision Tailored Transformer Designs

We note that most of the existing works focused on vision tasks tend to directly apply NLP Transformer models to computer vision problems. These include architectures designed for image recognition [11], video understanding [17] and especially multi-modal processing [181]. Although the initial results from these simple applications are quite encouraging and motivate us to look further into the strengths of self-attention and self-supervised learning, current architectures may still remain better tailored for language problems (with a sequence structure) and need further intuitions to make them more efficient for visual inputs. For example, vector attention [82] is a nice work in this direction, which attempts to specifically tailor the self-attention operation for visual inputs via learning channel-wise attention.
Similarly, [260] uses a jigsaw puzzle based self-supervision loss as a parallel branch in the Transformer to improve person re-identification. A recent work [35] rearranges spatially close tokens to better model relationships in spatially proximal locations. Token distillation [12] from pre-trained CNN models has also been used as a remedy to inject domain biases into the representations. While one may argue that architectures like Transformers should remain generic so as to be directly applicable across domains, we note that the high computational and time cost of pre-training such models demands novel design strategies to make their training more affordable on vision problems.

### 4.4 Neural Architecture Search for ViTs

While Neural Architecture Search (NAS) has been well explored for CNNs to find an optimized architecture, it is relatively less explored for Transformers (even for language transformers [261, 262]). Chen et al. [263] propose a one-shot NAS for vision transformers, called AutoFormer. BossNAS [264] searches for a hybrid architecture (CNN and Transformer). Another recent effort studies the trade-off between global and local information in Transformers in the context of vision applications [265]. It will be insightful to further explore domain-specific design choices (e.g., the contrasting requirements between language and vision domains) using NAS to design more efficient and lightweight models similar to CNNs [87].

### 4.5 Interpretability of Transformers

Through an extensive set of carefully designed experiments, Naseer et al. [266] investigate multiple intriguing properties of ViTs in terms of their generalization and robustness. They show that, compared with CNNs, ViTs demonstrate strong robustness against texture changes and severe occlusions, _e.g.,_ ViTs retain up to 60% top-1 accuracy on ImageNet even when 80% of the image content is randomly occluded.
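The patch-occlusion setting just described is easy to reproduce in spirit. The following sketch (our own construction for illustration, not the exact protocol of [266]) zeroes out 80% of the non-overlapping 16$\times$16 patches of an input image before it would be fed to a classifier:

```python
import numpy as np

def occlude_patches(img, patch=16, drop_ratio=0.8, seed=0):
    """Randomly zero out `drop_ratio` of the non-overlapping patches of an HxWxC image."""
    H, W, C = img.shape
    gh, gw = H // patch, W // patch          # patch grid, e.g. 14x14 for a 224x224 image
    n = gh * gw
    rng = np.random.default_rng(seed)
    drop = rng.choice(n, size=int(drop_ratio * n), replace=False)
    out = img.copy()
    for idx in drop:
        r, c = divmod(idx, gw)
        out[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch, :] = 0
    return out

img = np.ones((224, 224, 3), dtype=np.float32)
occ = occlude_patches(img)
print(round((occ == 0).mean(), 2))  # 0.8 — fraction of zeroed pixels
```

The observation in [266] is that a ViT classifier still recovers much of its top-1 accuracy on such heavily occluded inputs, whereas CNNs degrade more sharply.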
Given the strong performance of Transformer architectures, it is interesting and critical to interpret their decisions, _e.g.,_ by visualizing the regions of an image relevant to a given classification decision. The main challenge is that the attention originating in each layer gets inter-mixed in subsequent layers in a complex manner, making it difficult to visualize the relative contribution of input tokens towards final predictions. This is an open problem; however, some recent works [267, 268, 269] target enhanced interpretability of Transformers and report encouraging results. Attention roll-out and attention flow methods were proposed in [268] to estimate accurate attention distributions. However, this method functions in an ad-hoc manner and makes simplistic assumptions, _e.g.,_ that input tokens are linearly combined using attention weights across the layers. Chefer et al. [269] note that the attention scores obtained directly via the self-attention process (encoding relationships between tokens) or the reassignments in [268] do not provide an optimal solution. As an alternative, they propose to assign and propagate _relevancy scores_ through the Transformer network such that the sum of relevancy remains constant throughout the network. Their design can handle both the positive and negative attributions experienced in the self-attention layer. The proposed framework has the added advantage of providing class-specific visualizations. Despite these seminal works, visualizing and interpreting Transformers remains an unsolved problem, and methods are needed to obtain spatially precise, activation-specific visualizations. Further progress in this direction can help in better understanding Transformer models and in diagnosing erroneous behaviors and biases in the decision process. It can also guide the design of novel architectures that avoid such biases.
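Under the linear-mixing assumption mentioned above, attention rollout [268] can be sketched in a few lines. This is a minimal single-head version; the equal mixing of each attention matrix with the identity (to account for residual connections) follows the commonly used formulation:

```python
import numpy as np

def attention_rollout(attentions):
    """Multiply per-layer attention matrices (mixed with the identity for
    residual connections) to trace token mixing from input to output."""
    n = attentions[0].shape[-1]
    rollout = np.eye(n)
    for A in attentions:                     # A: (n, n), rows sum to 1
        A_res = 0.5 * A + 0.5 * np.eye(n)    # account for the residual branch
        A_res /= A_res.sum(axis=-1, keepdims=True)
        rollout = A_res @ rollout
    return rollout                           # (n, n): output-token-to-input-token attribution

# Toy example: 4 layers of random row-stochastic attention over 10 tokens.
rng = np.random.default_rng(0)
layers = []
for _ in range(4):
    e = np.exp(rng.standard_normal((10, 10)))
    layers.append(e / e.sum(axis=-1, keepdims=True))
R = attention_rollout(layers)
print(np.allclose(R.sum(axis=-1), 1.0))  # True: rollout stays row-stochastic
```

Row $i$ of the result approximates how much each input token contributed to output token $i$; in a ViT, the row of the class token is typically reshaped into a spatial heatmap.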
### 4.6 Hardware Efficient Designs

Large-scale Transformer networks can have intensive power and computation requirements, hindering their deployment on edge devices and in resource-constrained environments such as internet-of-things (IoT) platforms. Some recent efforts to compress and accelerate NLP models on embedded systems such as FPGAs have been reported [270]. Li et al. [270] used an enhanced block-circulant matrix-based representation to compress NLP models and proposed a new Field Programmable Gate Array (FPGA) architecture design to efficiently manage resources for high throughput and low latency. They achieved 27x, 3x and 81x improvements in performance (throughput measured in FPS), power consumption, and energy efficiency, respectively, relative to a CPU for the RoBERTa model [7]. Towards this goal, [262] proposed designing Hardware-Aware Transformers (HAT) using neural architecture search strategies [271, 272, 273]. Specifically, a SuperTransformer model is first trained for performance approximation, which can estimate a model’s performance without fully training it. This model comprises the largest possible model in the search space while sharing weights between common parts. Eventually, an evolutionary search is performed under hardware latency constraints to find a suitable SubTransformer model for a target hardware platform (_e.g.,_ IoT device, GPU, CPU). However, such hardware-efficient designs are currently lacking for vision Transformers, hindering their seamless deployment on resource-constrained devices. Further, the search cost of the evolutionary algorithms remains significant, with an associated impact of CO2 emissions on the environment.

### 4.7 Towards Integrating All Modalities

Since Transformers provide a unified design to process different modalities, recent efforts also focus on proposing more generic, general-purpose reasoning systems based on Transformers.
Inspired by biological systems that can process information from a diverse range of modalities, the Perceiver model [274] aims to learn a unified model that can process any given input modality without making domain-specific architectural assumptions. To scale to high-dimensional inputs, Perceiver uses an asymmetric cross-attention method to distill the input information into low-dimensional latent bottleneck features. Once the features are distilled into a compact, fixed-dimensional form, regular Transformer blocks are applied in the latent space. The original Perceiver model performs competitively with ResNets and ViTs on image classification and can process 3D data, audio, images, video, or their combinations. However, it can only generate fixed outputs, e.g., class probabilities. A recent improvement called Perceiver IO [275] aims to learn models with both flexible inputs and arbitrarily sized outputs. This enables application to problems that demand structured outputs, such as natural language tasks and visual comprehension. While these models avoid modality-dependent architectural choices, the learning itself still involves modality-dependent choices, e.g., specific augmentations or positional encodings. An interesting and open future direction is to achieve total modality-agnosticism in the learning pipeline.

## 5 Conclusion

Attention has played a key role in delivering efficient and accurate computer vision systems, while simultaneously providing insights into the function of deep neural networks. This survey reviews the self-attention approaches and specifically focuses on the Transformer and bi-directional encoding architectures that are built on the principle of self-attention. We first cover fundamental concepts pertaining to self-attention architectures and later provide an in-depth analysis of competing approaches for a broad range of computer vision applications.
Specifically, we include state-of-the-art self-attention models for image recognition, object detection, semantic and instance segmentation, video analysis and classification, visual question answering, visual commonsense reasoning, image captioning, vision-language navigation, clustering, few-shot learning, and 3D data analysis. We systematically highlight the key strengths and limitations of the existing methods and particularly elaborate on important future research directions. With its specific focus on computer vision tasks, this survey provides a unique view of the recent progress in self-attention and Transformer-based methods. We hope this effort will drive further interest in the vision community to leverage the potential of Transformer models and improve on their current limitations, _e.g.,_ reducing their carbon footprint.

## Acknowledgments

The authors would like to thank Tim Prangemeier (TU Darmstadt), Luowei Zhou (Microsoft Research), Jason Corso (University of Michigan), Pichao Wang (Alibaba Group), Yuqing Wang (Meituan), Alex Meinke (Uni-Tuebingen), Irwan Bello (Google Brain) and Manoj Kumar (Google Brain) for their helpful feedback on the survey. We would also like to thank Mohamed Afham for his help with a figure.

## References

* [1] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, “Attention is all you need,” in NeurIPS, 2017. * [2] M. Ott, S. Edunov, D. Grangier, and M. Auli, “Scaling neural machine translation,” in WMT, 2018. * [3] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “BERT: Pre-training of deep bidirectional transformers for language understanding,” arXiv preprint arXiv:1810.04805, 2018. * [4] A. Radford, K. Narasimhan, T. Salimans, and I. Sutskever, “Improving language understanding by generative pre-training,” tech. rep., OpenAI, 2018. * [5] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever, “Language models are unsupervised multitask learners,” tech.
rep., OpenAI, 2019. * [6] T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, et al., “Language models are few-shot learners,” arXiv preprint arXiv:2005.14165, 2020. * [7] Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov, “RoBERTa: A robustly optimized BERT pretraining approach,” arXiv preprint arXiv:1907.11692, 2019. * [8] C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu, “Exploring the limits of transfer learning with a unified text-to-text transformer,” arXiv preprint arXiv:1910.10683, 2019. * [9] D. Lepikhin, H. Lee, Y. Xu, D. Chen, O. Firat, Y. Huang, M. Krikun, N. Shazeer, and Z. Chen, “GShard: Scaling giant models with conditional computation and automatic sharding,” arXiv preprint arXiv:2006.16668, 2020. * [10] W. Fedus, B. Zoph, and N. Shazeer, “Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity,” arXiv preprint arXiv:2101.03961. * [11] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, et al., “An image is worth 16x16 words: Transformers for image recognition at scale,” arXiv preprint arXiv:2010.11929, 2020. * [12] H. Touvron, M. Cord, M. Douze, F. Massa, A. Sablayrolles, and H. Jégou, “Training data-efficient image transformers & distillation through attention,” arXiv preprint arXiv:2012.12877, 2020. * [13] N. Carion, F. Massa, G. Synnaeve, N. Usunier, A. Kirillov, and S. Zagoruyko, “End-to-end object detection with transformers,” arXiv preprint arXiv:2005.12872, 2020. * [14] X. Zhu, W. Su, L. Lu, B. Li, X. Wang, and J. Dai, “Deformable DETR: Deformable transformers for end-to-end object detection,” arXiv preprint arXiv:2010.04159, 2020. * [15] L. Ye, M. Rochan, Z. Liu, and Y. Wang, “Cross-modal self-attention network for referring image segmentation,” in CVPR, 2019.
* [16] F. Yang, H. Yang, J. Fu, H. Lu, and B. Guo, “Learning texture transformer network for image super-resolution,” in CVPR, 2020. * [17] C. Sun, A. Myers, C. Vondrick, K. Murphy, and C. Schmid, “VideoBERT: A joint model for video and language representation learning,” in ICCV, 2019. * [18] R. Girdhar, J. Carreira, C. Doersch, and A. Zisserman, “Video action transformer network,” in CVPR, 2019. * [19] H. Chen, Y. Wang, T. Guo, C. Xu, Y. Deng, Z. Liu, S. Ma, C. Xu, C. Xu, and W. Gao, “Pre-trained image processing transformer,” arXiv preprint arXiv:2012.00364, 2020. * [20] A. Ramesh, M. Pavlov, G. Goh, and S. Gray, “DALL·E: Creating images from text,” tech. rep., OpenAI, 2021. * [21] H. Tan and M. Bansal, “LXMERT: Learning cross-modality encoder representations from transformers,” in EMNLP-IJCNLP, 2019. * [22] W. Su, X. Zhu, Y. Cao, B. Li, L. Lu, F. Wei, and J. Dai, “VL-BERT: Pre-training of generic visual-linguistic representations,” arXiv preprint arXiv:1908.08530, 2019. * [23] X. Wang, C. Yeshwanth, and M. Nießner, “SceneFormer: Indoor scene generation with transformers,” arXiv preprint arXiv:2012.09793, 2020. * [24] M. Kumar, D. Weissenborn, and N. Kalchbrenner, “Colorization transformer,” in ICLR, 2021. * [25] C. Doersch, A. Gupta, and A. Zisserman, “CrossTransformers: spatially-aware few-shot transfer,” in NeurIPS, 2020. * [26] H.-J. Ye, H. Hu, D.-C. Zhan, and F. Sha, “Few-shot learning via embedding adaptation with set-to-set functions,” in CVPR, 2020. * [27] S. Chaudhari, G. Polatkan, R. Ramanath, and V. Mithal, “An attentive survey of attention models,” arXiv preprint arXiv:1904.02874, 2019. * [28] A. de Santana Correia and E. L. Colombini, “Attention, please! A survey of neural attention models in deep learning,” arXiv preprint arXiv:2103.16775, 2021. * [29] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan, “Show and tell: A neural image caption generator,” in CVPR, 2015. * [30] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. MIT Press, 2016.
* [31] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, 2015. * [32] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Computation, 1997. * [33] D. Hu, “An introductory survey on attention mechanisms in NLP problems,” in IntelliSys, 2019. * [34] Y. Tay, M. Dehghani, D. Bahri, and D. Metzler, “Efficient transformers: A survey,” arXiv preprint arXiv:2009.06732, 2020. * [35] L. Yuan, Y. Chen, T. Wang, W. Yu, Y. Shi, F. E. Tay, J. Feng, and S. Yan, “Tokens-to-token ViT: Training vision transformers from scratch on ImageNet,” arXiv preprint arXiv:2101.11986, 2021. * [36] Z. Liu, Y. Lin, Y. Cao, H. Hu, Y. Wei, Z. Zhang, S. Lin, and B. Guo, “Swin transformer: Hierarchical vision transformer using shifted windows,” arXiv preprint arXiv:2103.14030, 2021. * [37] X. Chu, Z. Tian, Y. Wang, B. Zhang, H. Ren, X. Wei, H. Xia, and C. Shen, “Twins: Revisiting the design of spatial attention in vision transformers,” arXiv preprint arXiv:2104.13840, 2021. * [38] S. Wang, B. Li, M. Khabsa, H. Fang, and H. Ma, “Linformer: Self-attention with linear complexity,” arXiv preprint arXiv:2006.04768, 2020. * [39] H. Zhang, I. Goodfellow, D. Metaxas, and A. Odena, “Self-attention generative adversarial networks,” in ICML, 2019. * [40] J. Pérez, J. Marinković, and P. Barceló, “On the turing completeness of modern neural network architectures,” in ICLR, 2018. * [41] J.-B. Cordonnier, A. Loukas, and M. Jaggi, “On the relationship between self-attention and convolutional layers,” in ICLR, 2019. * [42] J. Dai, H. Qi, Y. Xiong, Y. Li, G. Zhang, H. Hu, and Y. Wei, “Deformable convolutional networks,” in ICCV, 2017. * [43] Y.-C. Chen, L. Li, L. Yu, A. El Kholy, F. Ahmed, Z. Gan, Y. Cheng, and J.
Liu, “UNITER: Universal image-text representation learning,” in ECCV, 2020. * [44] X. Li, X. Yin, C. Li, P. Zhang, X. Hu, L. Zhang, L. Wang, H. Hu, L. Dong, F. Wei, et al., “Oscar: Object-semantics aligned pre-training for vision-language tasks,” in ECCV, 2020. * [45] K. Lin, L. Wang, and Z. Liu, “End-to-end human pose and mesh reconstruction with transformers,” arXiv preprint arXiv:2012.09760, 2020. * [46] S. Gidaris, P. Singh, and N. Komodakis, “Unsupervised representation learning by predicting image rotations,” in ICLR, 2018. * [47] “Revisiting the unreasonable effectiveness of data.” https://ai.googleblog.com/2017/07/revisiting-unreasonable-effectiveness.html. Accessed: 2020-12-31. * [48] L. Jing and Y. Tian, “Self-supervised visual feature learning with deep neural networks: A survey,” TPAMI, 2020. * [49] X. Liu, F. Zhang, Z. Hou, Z. Wang, L. Mian, J. Zhang, and J. Tang, “Self-supervised learning: Generative or contrastive,” arXiv preprint arXiv:2006.08218, 2020. * [50] “AAAI 2020 keynotes turing award winners event.” https://www.youtube.com/watch?v=UX8OubxsY8w. Accessed: 2020-12-31. * [51] R. Zhang, P. Isola, and A. A. Efros, “Colorful image colorization,” in ECCV, 2016. * [52] C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, et al., “Photo-realistic single image super-resolution using a generative adversarial network,” in CVPR, 2017. * [53] D. Pathak, P. Krahenbuhl, J. Donahue, T. Darrell, and A. Efros, “Context encoders: Feature learning by inpainting,” in CVPR, 2016. * [54] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in NeurIPS, 2014. * [55] D. Lin, K. Fu, Y. Wang, G. Xu, and X. Sun, “MARTA GANs: Unsupervised representation learning for remote sensing image classification,” GRSL, 2017. * [56] U. Ahsan, R. Madhok, and I.
Essa, “Video jigsaw: Unsupervised learning of spatiotemporal context for video action recognition,” in WACV, 2019. * [57] M. Noroozi and P. Favaro, “Unsupervised learning of visual representations by solving jigsaw puzzles,” in ECCV, 2016. * [58] D. Kim, D. Cho, D. Yoo, and I. S. Kweon, “Learning image representations by completing damaged jigsaw puzzles,” in WACV, 2018. * [59] L. Jing, X. Yang, J. Liu, and Y. Tian, “Self-supervised spatiotemporal feature learning via video rotation prediction,” arXiv preprint arXiv:1811.11387, 2018. * [60] H.-Y. Lee, J.-B. Huang, M. Singh, and M.-H. Yang, “Unsupervised representation learning by sorting sequences,” in ICCV, 2017. * [61] I. Misra, C. L. Zitnick, and M. Hebert, “Shuffle and learn: unsupervised learning using temporal order verification,” in ECCV, 2016. * [62] D. Wei, J. J. Lim, A. Zisserman, and W. T. Freeman, “Learning and using the arrow of time,” in CVPR, 2018. * [63] L. H. Li, M. Yatskar, D. Yin, C.-J. Hsieh, and K.-W. Chang, “VisualBERT: A simple and performant baseline for vision and language,” arXiv preprint arXiv:1908.03557, 2019. * [64] B. Korbar, D. Tran, and L. Torresani, “Cooperative learning of audio and video models from self-supervised synchronization,” in NeurIPS, 2018. * [65] R. Arandjelovic and A. Zisserman, “Look, listen and learn,” in ICCV, 2017. * [66] N. Sayed, B. Brattoli, and B. Ommer, “Cross and learn: Cross-modal self-supervision,” in GCPR, 2018. * [67] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in CVPR, 2016. * [68] J. L. Ba, J. R. Kiros, and G. E. Hinton, “Layer normalization,” arXiv preprint arXiv:1607.06450, 2016. * [69] A. Buades, B. Coll, and J.-M. Morel, “A non-local algorithm for image denoising,” in CVPR, 2005. * [70] X. Wang, R. Girshick, A. Gupta, and K. He, “Non-local neural networks,” in CVPR, 2018. * [71] W. Kay, J. Carreira, K. Simonyan, B. Zhang, C. Hillier, S. Vijayanarasimhan, F. Viola, T. Green, T. Back, P.
Natsev, et al., “The kinetics human action video dataset,” arXiv preprint arXiv:1705.06950, 2017. * [72] Z. Huang, X. Wang, L. Huang, C. Huang, Y. Wei, and W. Liu, “CCNet: Criss-cross attention for semantic segmentation,” in ICCV, 2019. * [73] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele, “The cityscapes dataset for semantic urban scene understanding,” in CVPR, 2016. * [74] B. Zhou, H. Zhao, X. Puig, S. Fidler, A. Barriuso, and A. Torralba, “Scene parsing through ade20k dataset,” in CVPR, 2017. * [75] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, “Microsoft COCO: Common objects in context,” in ECCV, 2014. * [76] X. Liang, K. Gong, X. Shen, and L. Lin, “Look into person: Joint body parsing & pose estimation network and a new benchmark,” TPAMI, 2018. * [77] G. J. Brostow, J. Fauqueur, and R. Cipolla, “Semantic object classes in video: A high-definition ground truth database,” Pattern Recognition Letters, 2009. * [78] H. Hu, Z. Zhang, Z. Xie, and S. Lin, “Local relation networks for image recognition,” in ICCV, 2019. * [79] I. Bello, B. Zoph, A. Vaswani, J. Shlens, and Q. V. Le, “Attention augmented convolutional networks,” in ICCV, 2019. * [80] P. Shaw, J. Uszkoreit, and A. Vaswani, “Self-attention with relative position representations,” in NAACL, 2018. * [81] N. Parmar, P. Ramachandran, A. Vaswani, I. Bello, A. Levskaya, and J. Shlens, “Stand-alone self-attention in vision models,” in NeurIPS, 2019. * [82] H. Zhao, J. Jia, and V. Koltun, “Exploring self-attention for image recognition,” in CVPR, 2020. * [83] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus, “Intriguing properties of neural networks,” arXiv preprint arXiv:1312.6199, 2013. * [84] M. M. Naseer, S. H. Khan, M. H. Khan, F. S. Khan, and F. Porikli, “Cross-domain transferability of adversarial perturbations,” in NeurIPS, 2019. * [85] M. Naseer, K.
Ranasinghe, S. Khan, F. S. Khan, and F. Porikli, “On improving adversarial transferability of vision transformers,” arXiv preprint arXiv:2106.04169, 2021. * [86] I. Radosavovic, R. P. Kosaraju, R. Girshick, K. He, and P. Dollár, “Designing network design spaces,” in CVPR, 2020. * [87] M. Tan and Q. V. Le, “EfficientNet: Rethinking model scaling for convolutional neural networks,” in ICML, 2019. * [88] K. Han, A. Xiao, E. Wu, J. Guo, C. Xu, and Y. Wang, “Transformer in transformer,” arXiv preprint arXiv:2103.00112, 2021. * [89] Z. Jiang, Q. Hou, L. Yuan, D. Zhou, Y. Shi, X. Jin, A. Wang, and J. Feng, “All tokens matter: Token labeling for training better vision transformers,” arXiv preprint arXiv:2104.10858, 2021. * [90] S. Yun, D. Han, S. J. Oh, S. Chun, J. Choe, and Y. Yoo, “Cutmix: Regularization strategy to train strong classifiers with localizable features,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 6023–6032, 2019. * [91] A. El-Nouby, H. Touvron, M. Caron, P. Bojanowski, M. Douze, A. Joulin, I. Laptev, N. Neverova, G. Synnaeve, J. Verbeek, and H. Jegou, “Xcit: Cross-covariance image transformers,” 2021. * [92] D. Zhou, B. Kang, X. Jin, L. Yang, X. Lian, Z. Jiang, Q. Hou, and J. Feng, “Deepvit: Towards deeper vision transformer,” 2021. * [93] W. Wang, E. Xie, X. Li, D.-P. Fan, K. Song, D. Liang, T. Lu, P. Luo, and L. Shao, “Pyramid vision transformer: A versatile backbone for dense prediction without convolutions,” arXiv preprint arXiv:2102.12122, 2021\. * [94] J. Yang, C. Li, P. Zhang, X. Dai, B. Xiao, L. Yuan, and J. Gao, “Focal self-attention for local-global interactions in vision transformers,” 2021. * [95] Z. Huang, Y. Ben, G. Luo, P. Cheng, G. Yu, and B. Fu, “Shuffle transformer: Rethinking spatial shuffle for vision transformer,” 2021. * [96] H. Wu, B. Xiao, N. Codella, M. Liu, X. Dai, L. Yuan, and L. Zhang, “Cvt: Introducing convolutions to vision transformers,” arXiv preprint arXiv:2103.15808, 2021. * [97] W. 
Wang, E. Xie, X. Li, D.-P. Fan, K. Song, D. Liang, T. Lu, P. Luo, and L. Shao, “Pvtv2: Improved baselines with pyramid vision transformer,” 2021. * [98] W. Xu, Y. Xu, T. Chang, and Z. Tu, “Co-scale conv-attentional image transformers,” 2021. * [99] W. Wang, L. Yao, L. Chen, D. Cai, X. He, and W. Liu, “Crossformer: A versatile vision transformer based on cross-scale attention,” arXiv preprint arXiv:2108.00154, 2021. * [100] C.-F. Chen, R. Panda, and Q. Fan, “Regionvit: Regional-to-local attention for vision transformers,” 2021. * [101] E. Xie, W. Wang, Z. Yu, A. Anandkumar, J. M. Alvarez, and P. Luo, “Segformer: Simple and efficient design for semantic segmentation with transformers,” 2021\. * [102] P. Zhang, X. Dai, J. Yang, B. Xiao, L. Yuan, L. Zhang, and J. Gao, “Multi-scale vision longformer: A new vision transformer for high-resolution image encoding,” ICCV 2021, 2021. * [103] I. Beltagy, M. E. Peters, and A. Cohan, “Longformer: The long-document transformer,” arXiv preprint arXiv:2004.05150, 2020. * [104] C.-F. Chen, Q. Fan, and R. Panda, “Crossvit: Cross-attention multi-scale vision transformer for image classification,” arXiv preprint arXiv:2103.14899, 2021. * [105] K. Yuan, S. Guo, Z. Liu, A. Zhou, F. Yu, and W. Wu, “Incorporating convolution designs into visual transformers,” arXiv preprint arXiv:2103.11816, 2021\. * [106] A. Hassani, S. Walton, N. Shah, A. Abuduweili, J. Li, and H. Shi, “Escaping the big data paradigm with compact transformers,” 2021. * [107] Y. Li, K. Zhang, J. Cao, R. Timofte, and L. V. Gool, “Localvit: Bringing locality to vision transformers,” 2021. * [108] B. Graham, A. El-Nouby, H. Touvron, P. Stock, A. Joulin, H. Jégou, and M. Douze, “Levit: a vision transformer in convnet’s clothing for faster inference,” 2021. * [109] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel, “Backpropagation applied to handwritten zip code recognition,” Neural computation, vol. 1, no. 4, pp. 541–551, 1989. 
* [110] Q. Zhang and Y. Yang, “Rest: An efficient transformer for visual recognition,” arXiv preprint arXiv:2105.13677, 2021. * [111] Z. Zhang, H. Zhang, L. Zhao, T. Chen, and T. Pfister, “Aggregating nested transformers,” in arXiv preprint arXiv:2105.12723, 2021. * [112] Z. Dai, H. Liu, Q. V. Le, and M. Tan, “Coatnet: Marrying convolution and attention for all data sizes,” 2021. * [113] X. Chu, Z. Tian, B. Zhang, X. Wang, X. Wei, H. Xia, and C. Shen, “Conditional positional encodings for vision transformers,” 2021. * [114] Y. Liu, G. Sun, Y. Qiu, L. Zhang, A. Chhatkuli, and L. Van Gool, “Transformer in convolutional neural networks,” arXiv preprint arXiv:2106.03180, 2021\. * [115] X. Chen, S. Xie, and K. He, “An empirical study of training self-supervised visual transformers,” arXiv e-prints, pp. arXiv–2104, 2021. * [116] X. Chen, H. Fan, R. Girshick, and K. He, “Improved baselines with momentum contrastive learning,” arXiv preprint arXiv:2003.04297, 2020. * [117] K. He, H. Fan, Y. Wu, S. Xie, and R. Girshick, “Momentum contrast for unsupervised visual representation learning,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9729–9738, 2020. * [118] Z. Xie, Y. Lin, Z. Yao, Z. Zhang, Q. Dai, Y. Cao, and H. Hu, “Self-supervised learning with swin transformers,” arXiv preprint arXiv:2105.04553, 2021\. * [119] J.-B. Grill, F. Strub, F. Altché, C. Tallec, P. H. Richemond, E. Buchatskaya, C. Doersch, B. A. Pires, Z. D. Guo, M. G. Azar, et al., “Bootstrap your own latent: A new approach to self-supervised learning,” arXiv preprint arXiv:2006.07733, 2020. * [120] M. Caron, H. Touvron, I. Misra, H. Jégou, J. Mairal, P. Bojanowski, and A. Joulin, “Emerging properties in self-supervised vision transformers,” arXiv preprint arXiv:2104.14294, 2021. * [121] C. Li, J. Yang, P. Zhang, M. Gao, B. Xiao, X. Dai, L. Yuan, and J. 
Gao, “Efficient self-supervised vision transformers for representation learning,” arXiv preprint arXiv:2106.09785, 2021. * [122] Y. Wang, X. Zhang, T. Yang, and J. Sun, “Anchor detr: Query design for transformer-based detector,” 2021. * [123] T. Chen, S. Saxena, L. Li, D. J. Fleet, and G. Hinton, “Pix2seq: A language modeling framework for object detection,” 2021. * [124] Y. Fang, B. Liao, X. Wang, J. Fang, J. Qi, R. Wu, J. Niu, and W. Liu, “You only look at one sequence: Rethinking transformer in vision through object detection,” 2021. * [125] S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards real-time object detection with region proposal networks,” TPAMI, 2016. * [126] R. Girshick, “Fast R-CNN,” in ICCV, 2015. * [127] K. He, G. Gkioxari, P. Dollár, and R. Girshick, “Mask R-CNN,” in ICCV, 2017. * [128] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: Unified, real-time object detection,” in CVPR, 2016. * [129] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg, “SSD: Single shot multibox detector,” in ECCV, 2016. * [130] T. Prangemeier, C. Reich, and H. Koeppl, “Attention-based transformers for instance segmentation of cells in microstructures,” in 2020 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), pp. 700–707, IEEE, 2020. * [131] T.-Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie, “Feature pyramid networks for object detection,” in CVPR, 2017. * [132] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, J. Uszkoreit, and N. Houlsby, “An image is worth 16x16 words: Transformers for image recognition at scale,” 2020. * [133] H. Wang, Y. Zhu, B. Green, H. Adam, A. Yuille, and L.-C. Chen, “Axial-DeepLab: Stand-alone axial-attention for panoptic segmentation,” arXiv preprint arXiv:2003.07853, 2020. * [134] S. Zheng, J. Lu, H. Zhao, X. Zhu, Z. Luo, Y. Wang, Y. Fu, J. Feng, T. 
Xiang, P. H. S. Torr, and L. Zhang, “Rethinking semantic segmentation from a sequence-to-sequence perspective with transformers,” 2021. * [135] R. Strudel, R. Garcia, I. Laptev, and C. Schmid, “Segmenter: Transformer for semantic segmentation,” 2021. * [136] A. Kirillov, K. He, R. Girshick, C. Rother, and P. Dollár, “Panoptic segmentation,” in CVPR, 2019. * [137] G. Neuhold, T. Ollmann, S. Rota Bulo, and P. Kontschieder, “The mapillary vistas dataset for semantic understanding of street scenes,” in ICCV, 2017\. * [138] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “ImageNet: A large-scale hierarchical image database,” in CVPR, 2009. * [139] L. Yu, P. Poirson, S. Yang, A. C. Berg, and T. L. Berg, “Modeling context in referring expressions,” in ECCV, 2016. * [140] J. Mao, J. Huang, A. Toshev, O. Camburu, A. L. Yuille, and K. Murphy, “Generation and comprehension of unambiguous object descriptions,” in CVPR, 2016. * [141] S. Kazemzadeh, V. Ordonez, M. Matten, and T. Berg, “Referitgame: Referring to objects in photographs of natural scenes,” in EMNLP, 2014. * [142] N. Parmar, A. Vaswani, J. Uszkoreit, Ł. Kaiser, N. Shazeer, A. Ku, and D. Tran, “Image transformer,” in ICML, 2018. * [143] M. Chen, A. Radford, R. Child, J. Wu, H. Jun, D. Luan, and I. Sutskever, “Generative pretraining from pixels,” in ICML, 2020. * [144] P. Esser, R. Rombach, and B. Ommer, “Taming transformers for high-resolution image synthesis,” arXiv:2012.09841, 2020. * [145] Y. Jiang, S. Chang, and Z. Wang, “Transgan: Two transformers can make one strong gan,” 2021. * [146] A. K. Bhunia, S. Khan, H. Cholakkal, R. M. Anwer, F. S. Khan, and M. Shah, “Handwriting transformers,” arXiv preprint arXiv:2104.03964, 2021. * [147] A. Van den Oord, N. Kalchbrenner, L. Espeholt, O. Vinyals, A. Graves, et al., “Conditional image generation with pixelcnn decoders,” in NeurIPS, 2016. * [148] A. Krizhevsky, “Learning multiple layers of features from tiny images,” tech. rep., 2009. * [149] A. 
Coates, A. Ng, and H. Lee, “An analysis of single-layer networks in unsupervised feature learning,” in AISTATS, 2011. * [150] T. Chen, S. Kornblith, M. Norouzi, and G. Hinton, “A simple framework for contrastive learning of visual representations,” arXiv preprint arXiv:2002.05709, 2020. * [151] P. Bachman, R. Hjelm, and W. Buchwalter, “Learning representations by maximizing mutual information across views,” in NeurIPS, 2019. * [152] O. J. Hénaff, A. Srinivas, J. De Fauw, A. Razavi, C. Doersch, S. Eslami, and A. v. d. Oord, “Data-efficient image recognition with contrastive predictive coding,” arXiv preprint arXiv:1905.09272, 2019. * [153] Y. Tian, D. Krishnan, and P. Isola, “Contrastive multiview coding,” arXiv preprint arXiv:1906.05849, 2019. * [154] S. Khan, H. Rahmani, S. A. A. Shah, and M. Bennamoun, “A guide to convolutional neural networks for computer vision,” Synthesis Lectures on Computer Vision, 2018. * [155] A. Radford, L. Metz, and S. Chintala, “Unsupervised representation learning with deep convolutional generative adversarial networks,” arXiv preprint arXiv:1511.06434, 2015. * [156] C. Gao, Y. Chen, S. Liu, Z. Tan, and S. Yan, “Adversarialnas: Adversarial neural architecture search for gans,” in CVPR, pp. 5680–5689, 2020. * [157] T. Karras, S. Laine, M. Aittala, J. Hellsten, J. Lehtinen, and T. Aila, “Analyzing and improving the image quality of stylegan,” in CVPR, pp. 8110–8119, 2020. * [158] S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele, and H. Lee, “Generative adversarial text to image synthesis,” in ICML, 2016. * [159] H. Zhang, T. Xu, H. Li, S. Zhang, X. Wang, X. Huang, and D. N. Metaxas, “StackGAN: Text to photo-realistic image synthesis with stacked generative adversarial networks,” in ICCV, 2017. * [160] H. Zhang, T. Xu, H. Li, S. Zhang, X. Wang, X. Huang, and D. N. Metaxas, “StackGAN++: Realistic image synthesis with stacked generative adversarial networks,” TPAMI, 2018. * [161] T. Xu, P. Zhang, Q. Huang, H. Zhang, Z. Gan, X. 
Huang, and X. He, “AttnGAN: Fine-grained text to image generation with attentional generative adversarial networks,” in CVPR, 2018. * [162] D. P. Kingma and M. Welling, “Auto-encoding variational bayes,” arXiv preprint arXiv:1312.6114, 2013. * [163] A. Razavi, A. van den Oord, and O. Vinyals, “Generating diverse high-fidelity images with vq-vae-2,” in NeurISP, 2019. * [164] J. Liang, J. Cao, G. Sun, K. Zhang, L. Van Gool, and R. Timofte, “Swinir: Image restoration using swin transformer,” in ICCVW, 2021. * [165] Z. Wang, X. Cun, J. Bao, and J. Liu, “Uformer: A general u-shaped transformer for image restoration,” arXiv preprint arXiv:2106.03106, 2021. * [166] Z. Lu, H. Liu, J. Li, and L. Zhang, “Efficient transformer for single image super-resolution,” arXiv preprint arXiv:2108.11084, 2021. * [167] Y. Zhang, K. Li, K. Li, L. Wang, B. Zhong, and Y. Fu, “Image super-resolution using very deep residual channel attention networks,” in ECCV, 2018. * [168] T. Dai, J. Cai, Y. Zhang, S. Xia, and L. Zhang, “Second-order attention network for single image super-resolution,” in CVPR, 2019. * [169] B. Niu, W. Wen, W. Ren, X. Zhang, L. Yang, S. Wang, K. Zhang, X. Cao, and H. Shen, “Single image super-resolution via a holistic attention network,” in ECCV, 2020. * [170] B. Lim, S. Son, H. Kim, S. Nah, and K. Mu Lee, “Enhanced deep residual networks for single image super-resolution,” in CVPRW, 2017. * [171] Y. Tai, J. Yang, and X. Liu, “Image super-resolution via deep recursive residual network,” in CVPR, 2017. * [172] W. Han, S. Chang, D. Liu, M. Yu, M. Witbrock, and T. Huang, “Image super-resolution via dual-state recurrent networks,” in CVPR, 2018. * [173] Y. Zhang, Y. Tian, Y. Kong, B. Zhong, and Y. Fu, “Residual dense network for image restoration,” TPAMI, 2020. * [174] X. Wang, K. Yu, S. Wu, J. Gu, Y. Liu, C. Dong, Y. Qiao, and C. Change Loy, “ESRGAN: enhanced super-resolution generative adversarial networks,” in ECCVW, 2018. * [175] S.-J. Park, H. Son, S. Cho, K.-S. 
Hong, and S. Lee, “SRFEAT: Single image super-resolution with feature discrimination,” in ECCV, 2018. * [176] M. S. Sajjadi, B. Scholkopf, and M. Hirsch, “EnhanceNet: Single image super-resolution through automated texture synthesis,” in ICCV, 2017. * [177] C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, et al., “Photo-realistic single image super-resolution using a generative adversarial network,” in CVPR, 2017. * [178] J. Johnson, A. Alahi, and L. Fei-Fei, “Perceptual losses for real-time style transfer and super-resolution,” in ECCV, 2016. * [179] J. Ho, N. Kalchbrenner, D. Weissenborn, and T. Salimans, “Axial attention in multidimensional transformers,” arXiv preprint arXiv:1912.12180, 2019. * [180] G. Li, N. Duan, Y. Fang, M. Gong, D. Jiang, and M. Zhou, “Unicoder-VL: A universal encoder for vision and language by cross-modal pre-training.,” in AAAI, 2020. * [181] J. Lu, D. Batra, D. Parikh, and S. Lee, “Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks,” in NeurIPS, 2019. * [182] S. Lee, Y. Yu, G. Kim, T. Breuel, J. Kautz, and Y. Song, “Parameter efficient multimodal transformers for video representation learning,” arXiv preprint arXiv:2012.04124, 2020. * [183] S. Antol, A. Agrawal, J. Lu, M. Mitchell, D. Batra, C. Lawrence Zitnick, and D. Parikh, “VQA: Visual question answering,” in ICCV, 2015. * [184] R. Zellers, Y. Bisk, A. Farhadi, and Y. Choi, “From recognition to cognition: Visual commonsense reasoning,” in CVPR, 2019. * [185] K.-H. Lee, X. Chen, G. Hua, H. Hu, and X. He, “Stacked cross attention for image-text matching,” in ECCV, 2018. * [186] A. Suhr, S. Zhou, A. Zhang, I. Zhang, H. Bai, and Y. Artzi, “A corpus for reasoning about natural language grounded in photographs,” arXiv preprint arXiv:1811.00491, 2018. * [187] J. Carreira, E. Noland, C. Hillier, and A. 
Zisserman, “A short note on the kinetics-700 human action dataset,” arXiv:1907.06987, 2019. * [188] K. Soomro, A. R. Zamir, and M. Shah, “UCF101: A dataset of 101 human actions classes from videos in the wild,” arXiv preprint arXiv:1212.0402, 2012\. * [189] J. F. Gemmeke, D. P. Ellis, D. Freedman, A. Jansen, W. Lawrence, R. C. Moore, M. Plakal, and M. Ritter, “Audio set: An ontology and human-labeled dataset for audio events,” in ICASSP, 2017. * [190] G. A. Sigurdsson, G. Varol, X. Wang, A. Farhadi, I. Laptev, and A. Gupta, “Hollywood in homes: Crowdsourcing data collection for activity understanding,” in ECCV, 2016. * [191] H. Tan and M. Bansal, “Vokenization: Improving language understanding with contextualized, visual-grounded supervision,” in EMNLP, 2020. * [192] W. Hao, C. Li, X. Li, L. Carin, and J. Gao, “Towards learning a generic agent for vision-and-language navigation via pre-training,” in CVPR, 2020. * [193] A. Majumdar, A. Shrivastava, S. Lee, P. Anderson, D. Parikh, and D. Batra, “Improving vision-and-language navigation with image-text pairs from the web,” arXiv preprint arXiv:2004.14973, 2020. * [194] K. Chen, J. K. Chen, J. Chuang, M. Vázquez, and S. Savarese, “Topological planning with transformers for vision-and-language navigation,” arXiv preprint arXiv:2012.05292, 2020. * [195] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark, et al., “Learning transferable visual models from natural language supervision,” Image, vol. 2, p. T2, 2021. * [196] P. Sharma, N. Ding, S. Goodman, and R. Soricut, “Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning,” in ACL, 2018. * [197] L. Zhou, H. Palangi, L. Zhang, H. Hu, J. Corso, and J. Gao, “Unified vision-language pre-training for image captioning and vqa,” in AAAI, vol. 34, pp. 13041–13049, 2020. * [198] C. Sun, F. Baradel, K. Murphy, and C. 
Schmid, “Learning video representations using contrastive bidirectional transformer,” arXiv preprint arXiv:1906.05743, 2019. * [199] C. Alberti, J. Ling, M. Collins, and D. Reitter, “Fusion of detected objects in text for visual question answering,” in EMNLP, 2019. * [200] R. Krishna, Y. Zhu, O. Groth, J. Johnson, K. Hata, J. Kravitz, S. Chen, Y. Kalantidis, L.-J. Li, D. A. Shamma, et al., “Visual genome: Connecting language and vision using crowdsourced dense image annotations,” IJCV, 2017. * [201] V. Ordonez, G. Kulkarni, and T. L. Berg, “Im2text: Describing images using 1 million captioned photographs,” in NeurIPS, 2011. * [202] P. Zhang, X. Li, X. Hu, J. Yang, L. Zhang, L. Wang, Y. Choi, and J. Gao, “Vinvl: Revisiting visual representations in vision-language models,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5579–5588, 2021. * [203] A. Kamath, M. Singh, Y. LeCun, I. Misra, G. Synnaeve, and N. Carion, “Mdetr–modulated detection for end-to-end multi-modal understanding,” arXiv preprint arXiv:2104.12763, 2021. * [204] J. Deng, Z. Yang, T. Chen, W. Zhou, and H. Li, “Transvg: End-to-end visual grounding with transformers,” 2021. * [205] M. Li and L. Sigal, “Referring transformer: A one-step approach to multi-task visual grounding,” arXiv preprint arXiv:2106.03089, 2021. * [206] Y. Du, Z. Fu, Q. Liu, and Y. Wang, “Visual grounding with transformers,” arXiv preprint arXiv:2105.04281, 2021. * [207] S. Ging, M. Zolfaghari, H. Pirsiavash, and T. Brox, “COOT: Cooperative hierarchical transformer for video-text representation learning,” arXiv preprint arXiv:2011.00597, 2020. * [208] H. Seong, J. Hyun, and E. Kim, “Video multitask transformer network,” in ICCV Workshops, pp. 0–0, 2019. * [209] Y. Wang, Z. Xu, X. Wang, C. Shen, B. Cheng, H. Shen, and H. Xia, “End-to-end video instance segmentation with transformers,” arXiv preprint arXiv:2011.14503, 2020. * [210] L. Zhou, Y. Zhou, J. Corso, R. Socher, and C. 
Xiong, “End-to-end dense video captioning with masked transformer,” in CVPR, 2018. * [211] D. Neimark, O. Bar, M. Zohar, and D. Asselmann, “Video transformer network,” arXiv preprint arXiv:2102.00719, 2021. * [212] A. Arnab, M. Dehghani, G. Heigold, C. Sun, M. Lučić, and C. Schmid, “Vivit: A video vision transformer,” arXiv preprint arXiv:2103.15691, 2021\. * [213] G. Bertasius, H. Wang, and L. Torresani, “Is space-time attention all you need for video understanding?,” in Proceedings of the International Conference on Machine Learning (ICML), July 2021. * [214] R. Krishna, K. Hata, F. Ren, L. Fei-Fei, and J. Carlos Niebles, “Dense-captioning events in videos,” in ICCV, pp. 706–715, 2017. * [215] L. Zhou, C. Xu, and J. Corso, “Towards automatic learning of procedures from web instructional videos,” in AAAI, vol. 32, 2018. * [216] C. Plizzari, M. Cannici, and M. Matteucci, “Spatial temporal transformer network for skeleton-based action recognition,” arXiv preprint arXiv:2008.07404, 2020. * [217] A. Shahroudy, J. Liu, T.-T. Ng, and G. Wang, “NTU RGB+D: A large scale dataset for 3d human activity analysis,” in CVPR, 2016. * [218] J. Liu, A. Shahroudy, M. Perez, G. Wang, L.-Y. Duan, and A. C. Kot, “NTU RGB+D 120: A large-scale benchmark for 3d human activity understanding,” TPAMI, 2019. * [219] H. Fan, B. Xiong, K. Mangalam, Y. Li, Z. Yan, J. Malik, and C. Feichtenhofer, “Multiscale vision transformers,” 2021. * [220] J. Wang, G. Bertasius, D. Tran, and L. Torresani, “Long-short temporal contrastive learning of video transformers,” arXiv preprint arXiv:2106.09212, 2021. * [221] L. Yang, Y. Fan, and N. Xu, “Video instance segmentation,” in ICCV, pp. 5188–5197, 2019. * [222] G. Bertasius and L. Torresani, “Classifying, segmenting, and tracking object instances in video with mask propagation,” in CVPR, pp. 9739–9748, 2020\. * [223] E. Triantafillou, T. Zhu, V. Dumoulin, P. Lamblin, U. Evci, K. Xu, R. Goroshin, C. Gelada, K. Swersky, P.-A. 
Manzagol, et al., “Meta-dataset: A dataset of datasets for learning to learn from few examples,” in ICLR, 2020\. * [224] T. N. Kipf and M. Welling, “Semi-supervised classification with graph convolutional networks,” arXiv preprint arXiv:1609.02907, 2016. * [225] M. Zaheer, S. Kottur, S. Ravanbakhsh, B. Poczos, R. R. Salakhutdinov, and A. J. Smola, “Deep sets,” in NeurIPS, 2017. * [226] L. Liu, W. Hamilton, G. Long, J. Jiang, and H. Larochelle, “A universal representation transformer layer for few-shot image classification,” 2020. * [227] H. Edwards and A. Storkey, “Towards a neural statistician,” arXiv preprint arXiv:1606.02185, 2016. * [228] J. Lee, Y. Lee, J. Kim, A. Kosiorek, S. Choi, and Y. W. Teh, “Set transformer: A framework for attention-based permutation-invariant neural networks,” in ICML, 2019. * [229] J. Lee, Y. Lee, and Y. W. Teh, “Deep amortized clustering,” arXiv preprint arXiv:1909.13433, 2019. * [230] H. Zhao, L. Jiang, J. Jia, P. Torr, and V. Koltun, “Point transformer,” arXiv preprint arXiv:2012.09164, 2020. * [231] M.-H. Guo, J.-X. Cai, Z.-N. Liu, T.-J. Mu, R. R. Martin, and S.-M. Hu, “Pct: Point cloud transformer,” arXiv preprint arXiv:2012.09688, 2020. * [232] Z. Wu, S. Song, A. Khosla, F. Yu, L. Zhang, X. Tang, and J. Xiao, “3D ShapeNets: A deep representation for volumetric shapes,” in CVPR, 2015\. * [233] A. X. Chang, T. Funkhouser, L. Guibas, P. Hanrahan, Q. Huang, Z. Li, S. Savarese, M. Savva, S. Song, H. Su, J. Xiao, L. Yi, and F. Yu, “ShapeNet: An information-rich 3d model repository,” arXiv preprint arXiv:1512.03012, 2015. * [234] C. Ionescu, D. Papava, V. Olaru, and C. Sminchisescu, “Human3.6M: Large scale datasets and predictive methods for 3D human sensing in natural environments,” TPAMI, 2013. * [235] T. von Marcard, R. Henschel, M. J. Black, B. Rosenhahn, and G. Pons-Moll, “Recovering accurate 3d human pose in the wild using imus and a moving camera,” in ECCV, 2018. * [236] C. Zimmermann, D. Ceylan, J. Yang, B. Russell, M. 
Argus, and T. Brox, “FreiHAND: A dataset for markerless capture of hand pose and shape from single rgb images,” in ICCV, 2019. * [237] “OpenAI’s GPT-3 language model: A technical overview.” https://lambdalabs.com/blog/demystifying-gpt-3/. Accessed: 2020-12-31. * [238] X. Zhai, A. Kolesnikov, N. Houlsby, and L. Beyer, “Scaling vision transformers,” 2021. * [239] P. Young, A. Lai, M. Hodosh, and J. Hockenmaier, “From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions,” TACL, 2014. * [240] Y. Goyal, T. Khot, D. Summers-Stay, D. Batra, and D. Parikh, “Making the v in vqa matter: Elevating the role of image understanding in visual question answering,” in CVPR, 2017. * [241] B. A. Plummer, L. Wang, C. M. Cervantes, J. C. Caicedo, J. Hockenmaier, and S. Lazebnik, “Flickr30k entities: Collecting region-to-phrase correspondences for richer image-to-sentence models,” in ICCV, 2015. * [242] C. R. Qi, L. Yi, H. Su, and L. J. Guibas, “PointNet++: Deep hierarchical feature learning on point sets in a metric space,” NeurIPS, 2017. * [243] H. Touvron, M. Cord, A. Sablayrolles, G. Synnaeve, and H. Jégou, “Going deeper with image transformers,” arXiv preprint arXiv:2103.17239, 2021\. * [244] S. Xie, R. Girshick, P. Dollár, Z. Tu, and K. He, “Aggregated residual transformations for deep neural networks,” in CVPR, 2017. * [245] R. Child, S. Gray, A. Radford, and I. Sutskever, “Generating long sequences with sparse transformers,” arXiv:1904.10509, 2019. * [246] N. Kitaev, Ł. Kaiser, and A. Levskaya, “Reformer: The efficient transformer,” in ICLR, 2020. * [247] I. Bello, “Lambdanetworks: Modeling long-range interactions without attention,” in International Conference on Learning Representations, 2021\. * [248] A. Vyas, A. Katharopoulos, and F. Fleuret, “Fast transformers with clustered attention,” NeurIPS, 2020. * [249] Y.-H. Wu, Y. Liu, X. Zhan, and M.-M. 
Cheng, “P2t: Pyramid pooling transformer for scene understanding,” arXiv preprint arXiv:2106.12011, 2021. * [250] A. Vaswani, P. Ramachandran, A. Srinivas, N. Parmar, B. Hechtman, and J. Shlens, “Scaling local self-attention for parameter efficient visual backbones,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12894–12904, 2021. * [251] X. Dong, J. Bao, D. Chen, W. Zhang, N. Yu, L. Yuan, D. Chen, and B. Guo, “Cswin transformer: A general vision transformer backbone with cross-shaped windows,” arXiv preprint arXiv:2107.00652, 2021. * [252] Y. Xiong, Z. Zeng, R. Chakraborty, M. Tan, G. Fung, Y. Li, and V. Singh, “Nystr$\backslash$” omformer: A nystr$\backslash$” om-based algorithm for approximating self-attention,” in AAAI, 2021. * [253] Y. Tay, D. Bahri, D. Metzler, D. Juan, Z. Zhao, and C. Zheng, “Synthesizer: Rethinking self-attention in transformer models,” in ICML, 2021. * [254] H. Peng, N. Pappas, D. Yogatama, R. Schwartz, N. A. Smith, and L. Kong, “Random feature attention,” in ICLR, 2021. * [255] K. Choromanski, V. Likhosherstov, D. Dohan, X. Song, A. Gane, T. Sarlos, P. Hawkins, J. Davis, A. Mohiuddin, L. Kaiser, et al., “Rethinking attention with performers,” in ICLR, 2021. * [256] Y. Tay, D. Bahri, L. Yang, D. Metzler, and D.-C. Juan, “Sparse sinkhorn attention,” in ICML, 2020. * [257] X. Chen, C.-J. Hsieh, and B. Gong, “When vision transformers outperform resnets without pretraining or strong data augmentations,” arXiv preprint arXiv:2106.01548, 2021. * [258] P. Foret, A. Kleiner, H. Mobahi, and B. Neyshabur, “Sharpness-aware minimization for efficiently improving generalization,” arXiv preprint arXiv:2010.01412, 2020. * [259] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethinking the inception architecture for computer vision,” in Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2818–2826, 2016. * [260] S. He, H. Luo, P. Wang, F. Wang, H. Li, and W. 
Jiang, “Transreid: Transformer-based object re-identification,” arXiv:2102.04378, 2021. * [261] D. R. So, C. Liang, and Q. V. Le, “The evolved transformer,” 2019. * [262] H. Wang, Z. Wu, Z. Liu, H. Cai, L. Zhu, C. Gan, and S. Han, “Hat: Hardware-aware transformers for efficient natural language processing,” 2020\. * [263] M. Chen, H. Peng, J. Fu, and H. Ling, “Autoformer: Searching transformers for visual recognition,” arXiv preprint arXiv:2107.00651, 2021. * [264] C. Li, T. Tang, G. Wang, J. Peng, B. Wang, X. Liang, and X. Chang, “Bossnas: Exploring hybrid cnn-transformers with block-wisely self-supervised neural architecture search,” arXiv preprint arXiv:2103.12424, 2021. * [265] B. Chen, P. Li, C. Li, B. Li, L. Bai, C. Lin, M. Sun, W. Ouyang, et al., “Glit: Neural architecture search for global and local image transformer,” arXiv preprint arXiv:2107.02960, 2021. * [266] M. Naseer, K. Ranasinghe, S. Khan, M. Hayat, F. S. Khan, and M.-H. Yang, “Intriguing properties of vision transformers,” arXiv preprint arXiv:2105.10497, 2021. * [267] E. Voita, D. Talbot, F. Moiseev, R. Sennrich, and I. Titov, “Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned,” arXiv preprint arXiv:1905.09418, 2019. * [268] S. Abnar and W. Zuidema, “Quantifying attention flow in transformers,” arXiv preprint arXiv:2005.00928, 2020. * [269] H. Chefer, S. Gur, and L. Wolf, “Transformer interpretability beyond attention visualization,” arXiv preprint arXiv:2012.09838, 2020. * [270] B. Li, S. Pandey, H. Fang, Y. Lyv, J. Li, J. Chen, M. Xie, L. Wan, H. Liu, and C. Ding, “FTRANS: energy-efficient acceleration of transformers using fpga,” in ISLPED, 2020. * [271] G. Bender, P.-J. Kindermans, B. Zoph, V. Vasudevan, and Q. Le, “Understanding and simplifying one-shot architecture search,” in ICML, 2018. * [272] Z. Guo, X. Zhang, H. Mu, W. Heng, Z. Liu, Y. Wei, and J. 
Sun, “Single path one-shot neural architecture search with uniform sampling,” arXiv preprint arXiv:1904.00420, 2019. * [273] H. Pham, M. Y. Guan, B. Zoph, Q. V. Le, and J. Dean, “Efficient neural architecture search via parameter sharing,” in ICML, 2018. * [274] A. Jaegle, F. Gimeno, A. Brock, A. Zisserman, O. Vinyals, and J. Carreira, “Perceiver: General perception with iterative attention,” arXiv preprint arXiv:2103.03206, 2021. * [275] A. Jaegle, S. Borgeaud, J.-B. Alayrac, C. Doersch, C. Ionescu, D. Ding, S. Koppula, D. Zoran, A. Brock, E. Shelhamer, et al., “Perceiver io: A general architecture for structured inputs & outputs,” arXiv preprint arXiv:2107.14795, 2021.
email for correspondence: [email protected]

# Superconductivity from Luttinger surfaces: Emergent $\infty$-body SYK physics

Chandan Setty

Department of Physics, University of Florida, Gainesville, Florida, USA

Department of Physics and Astronomy, Rice University, Houston, Texas, USA

###### Abstract

The pairing of two electrons on a Fermi surface due to an infinitesimal attraction between them always results in a superconducting instability at zero temperature ($T=0$). The equivalent question of a pairing instability on a Luttinger surface (LS) – a contour of zeros of the propagator – instead leads to a quantum critical point (QCP) that separates a non-Fermi liquid (NFL) and a superconductor. A surprising and little understood aspect of pair fluctuations at this QCP is that their thermodynamics maps to that of the Sachdev-Ye-Kitaev (SYK) model in the strong-coupling limit. Here, we offer a simple justification for this mapping by demonstrating that (i) LS models share the reparametrization symmetry of the $q\rightarrow\infty$ SYK model with $q$-body interactions close to the LS, and (ii) the enforcement of gauge invariance results in a $\frac{1}{\sqrt{\tau}}$ ($\tau\sim T^{-1}$) behavior of the fluctuation propagator near the QCP, as is a feature of the fundamental SYK fermion.

## I Introduction

The theory of superconductivity by Bardeen, Cooper and Schrieffer Bardeen _et al._ (1957) relies on the existence of a Fermi surface (FS) – a contour of poles of the single-particle Green function where excitations are long-lived and the notion of a quasiparticle is well-defined. The superconducting phase then follows from a net attractive interaction that pairs two such quasiparticles, rendering the FS unstable.
The question of whether such an instability can exist, or for that matter be defined, on a Luttinger surface (LS) Abrikosov _et al._ (1965), or a contour of zeros of the propagator due to a divergent self-energy Abrikosov _et al._ (1965); Norman _et al._ (1998); Essler and Tsvelik (2002); Konik _et al._ (2006a); Yang _et al._ (2006); Stanescu and Kotliar (2006); Hong and Phillips (2012); Scheurer _et al._ (2018), is less straightforward. The obstacle to such a generalization stems from a complete breakdown of the quasiparticle concept on a LS – the quasiparticle scattering lifetime is vanishingly small in this limit, and the degrees of freedom that constitute a “Cooper pair” are either ill-defined or unknown due to a lack of exactly solvable models. Yet, experiments furnish numerous examples of pairing in highly incoherent matter Taillefer (2010), including the Cuprates, where LSs play a prominent role in the normal-state phenomenology Norman _et al._ (1998); Timusk and Statt (1999); Yoshida _et al._ (2006); Vishik _et al._ (2010); He _et al._ (2011); Yang _et al._ (2011); Chakravarty (2010); Dzyaloshinskii (1996, 2003); Essler and Tsvelik (2002); Konik _et al._ (2006b); Yang _et al._ (2006); Altshuler _et al._ (1998); Stanescu and Kotliar (2006); Rosch (2007); Dave _et al._ (2013); Berthod _et al._ (2006); Sakai _et al._ (2009); Phillips (2006); Valenzuela and Bascones (2007); Norman _et al._ (2007); Vanacore _et al._ (2018); Stanescu _et al._ (2007); Scheurer _et al._ (2018). In a recent attempt Setty (2020) to address the question above, it was demonstrated phenomenologically that such a pairing instability can indeed exist on a minimal model LS where the self-energy has a simple pole. Unlike the case of a Fermi liquid, the pair susceptibility in this scenario diverges even at zero temperature, resulting in a superconductor to non-Fermi-liquid (NFL) quantum phase transition above a critical residue of the self-energy pole.
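The minimal model with a simple self-energy pole can be sketched as follows (a hedged illustration, assuming the phenomenological form in which the self-energy is proportional to the non-interacting propagator, as in Refs. Yang _et al._ (2006); Konik _et al._ (2006b); here $\Delta^{2}$ denotes the pole residue and $\xi_{\mathbf{k}}$ the bare dispersion — the precise parametrization of Setty (2020) may differ):

```latex
% Simple-pole self-energy, proportional to the non-interacting propagator G_0
\Sigma(\omega,\mathbf{k}) = \Delta^{2}\, G_{0}(\omega,\mathbf{k})
                          = \frac{\Delta^{2}}{\omega - \xi_{\mathbf{k}}},
% Resulting dressed propagator: it has zeros exactly where G_0 diverges
G(\omega,\mathbf{k}) = \frac{1}{\omega - \xi_{\mathbf{k}} - \Sigma(\omega,\mathbf{k})}
                     = \frac{\omega - \xi_{\mathbf{k}}}{(\omega - \xi_{\mathbf{k}})^{2} - \Delta^{2}} .
```

In this sketch $G$ vanishes on the contour $\omega = \xi_{\mathbf{k}}$, so at $\omega = 0$ the zeros trace out $\xi_{\mathbf{k}} = 0$: the original FS is converted into a LS, with the residue $\Delta^{2}$ serving as the tuning parameter across the NFL–superconductor transition.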
In addition to a power-law divergence of the spectral density, a surprising new feature of the NFL phase close to the quantum critical point (QCP) was uncovered – the pair fluctuation free energy resembles that of the Sachdev-Ye-Kitaev (SYK) Sachdev and Ye (1993); Kitaev (2015); Maldacena and Stanford (2016) model in the limit where the self-energy residue is much greater than the temperature (strong coupling limit). The SYK model has garnered recent interest (see Rosenhaus (2019) and references therein) due to its connections to gravitational models describing black holes Kitaev (2015). Motivated by these observations, Phillips and coworkers Phillips _et al._ revived an exactly solvable but startlingly simple microscopic model by Hatsugai and Kohmoto (HK) Hatsugai and Kohmoto (1992) that hosts finite frequency LSs in the Mott phase. Upon doping the Mott insulator, they found that the elementary pair excitations are formed by doublons and holons as opposed to Bogoliubov quasiparticles. That such a mapping between the SYK model and LS fluctuation thermodynamics should work so well, yielding an exact but simple picture of pair excitations upon doping, is unsettling and can be critiqued as coincidental – so far, no symmetry-based argument has been invoked to justify the mapping, nor is there a consistent analysis that places interaction vertices and self-energies on an equal footing in accordance with the Ward-Takahashi identity Nambu (1960); Schrieffer (2018). It is the purpose of this paper to show that LS models, such as the one by HK, share the symmetries of the $q\rightarrow\infty$ limit of the SYK model with a $q$-body interaction, and that the propagator resulting from gauge-invariant pair fluctuations on a LS is the SYK conformal Green function. Thus our work gives a simple justification for the robustness of the mapping between pair fluctuations on a LS and SYK models.
More specifically, we demonstrate that LS models where the self-energy has a simple pole acquire an infinite-$q$ reparametrization symmetry in the low-energy scaling limit. The proof we provide follows along the lines of its original formulation in the context of the SYK model by the authors of Refs. Sachdev (2015); Kitaev (2015); Maldacena and Stanford (2016). In contrast, however, we work with a self-energy that is proportional to the single particle non-interacting Green function – a characteristic of the HK or phenomenological models proposed in Refs. Yang _et al._ (2006); Konik _et al._ (2006b) where the original FS is converted into a LS. This is a key feature of our analysis that distinguishes it from the random matrix model (or the $q=2$ SYK model). In the latter scenario, the self-energy is proportional to a linear power of the total Green function in the presence of random hopping matrix elements; therefore, there are no propagator zeros, rendering the model effectively non-interacting. A further exact gauge-invariant evaluation of the pair response from the model LS Green function is essential for the theory to maintain charge conservation. This treatment follows from the Ward-Takahashi Nambu (1960); Schrieffer (2018) identity and ensures that interaction vertices and self-energies are placed on an equal footing. Recent progress has been made in this direction for several phenomenological models describing the Cuprate pseudo-gap Boyack _et al._ (2016); Boyack (2017); Dai and Lee (2017); He _et al._ (2017); Guo _et al._ (2018). We find that gauge-invariance results in a $\frac{1}{\sqrt{\tau}}$ ($\tau\sim T^{-1}$) behavior of the fluctuation propagator in the strong coupling limit – a feature of the fundamental, particle-hole asymmetric, SYK fermion. As a consequence, the fluctuation free-energy, computed exactly from the gauge-invariant pair response, mimics leading order SYK fluctuations in the strong coupling limit.
Furthermore, we find that gauge invariance forces the fluctuation density of states, obtained from the partition function in the static, long-wavelength limit, to exhibit a $\omega^{-1}$ behavior at low energies, indicating a small energy spacing in the fluctuation spectrum. In the weak coupling limit, vertex corrections only have quantitative effects on the phase diagram. The conclusions of our work point toward the existence of a mapping between effective theories of fluctuations on model LSs such as the HK model and models of gravity. ## II Models and Results Models: Various phenomenological Norman _et al._ (1998); Yang _et al._ (2006); Konik _et al._ (2006b), microscopic Baskaran (1991); Hatsugai and Kohmoto (1992); Essler and Tsvelik (2002); Konik _et al._ (2006a); Stanescu and Kotliar (2006); Rosch (2007); Sakai _et al._ (2009); Eder _et al._ (2011); Hong and Phillips (2012); Dave _et al._ (2013); Scheurer _et al._ (2018) and holographic Edalati _et al._ (2011a, b) models have been used as relevant starting points for describing LSs. Some of these models have been used extensively to understand and interpret Dzyaloshinskii (1996, 2003); Essler and Tsvelik (2002); Konik _et al._ (2006a); Yang _et al._ (2006); Altshuler _et al._ (1998); Stanescu and Kotliar (2006); Stanescu _et al._ (2007); Rosch (2007); Dave _et al._ (2013); Berthod _et al._ (2006); Sakai _et al._ (2009); Phillips (2006); Valenzuela and Bascones (2007); Norman _et al._ (2007); Vanacore _et al._ (2018); Scheurer _et al._ (2018) a variety of experimental observations in the strongly interacting phase of the Cuprates Timusk and Statt (1999); Yoshida _et al._ (2006); Vishik _et al._ (2010); He _et al._ (2011); Yang _et al._ (2011); Chakravarty (2010).
The simplest among them are models where the full interacting inverse Green function, $G(\textbf{p},i\epsilon_{n})^{-1}=i\epsilon_{n}-\xi(\textbf{p})-\Sigma(\textbf{p},i\epsilon_{n})$, contains a self-energy of the form Norman _et al._ (1998); Yang _et al._ (2006); Konik _et al._ (2006b) $\displaystyle\Sigma(\textbf{p},i\epsilon_{n})=\frac{u^{2}}{i\epsilon_{n}+\xi(\textbf{p})}.$ (1) Here we have defined $\xi(\textbf{p})=\epsilon(\textbf{p})-\mu$ as the band dispersion with chemical potential $\mu$, $\epsilon_{n}$ is the fermionic frequency and $u$ sets the energy scale of the residue of the pole. The microscopic HK Hamiltonian offers an interpretation of the parameter $u$ as a four-body interaction term where only scattering processes that conserve the total center of mass are included. In Fourier transformed space, this Hamiltonian takes a particularly simple form $H_{\rm HK}=\sum_{\textbf{k}}H_{\textbf{k}}=\sum_{\textbf{k}}\left[\xi_{\textbf{k}}(n_{\textbf{k}\uparrow}+n_{\textbf{k}\downarrow})+2u~{}n_{\textbf{k}\uparrow}\,n_{\textbf{k}\downarrow}\right],$ (2) where $n_{\textbf{k}\sigma}$ is the number operator for a state with momentum k and spin $\sigma$. Hence, the HK model is simply a Hubbard model in momentum space where the interaction term commutes with the kinetic energy and is exactly solvable. In the scenario where the interaction $2u$ is larger than the bandwidth of the non-interacting bands, there exists a Mott gap between the upper and lower Hubbard-like bands (although the model is too simple to capture any dynamical spectral weight transfer effects that the Hubbard model does). A more recent analysis Yang (2020) of Fermi arcs and the pseudo-gap in the HK model led to the interpretation of $u$ as the pseudo-gap order parameter in the normal state. This is in line with expectations from previous works Norman _et al._ (1998); Konik _et al._ (2006b); Yang _et al._ (2006). The propagator of the model Hamiltonian in Eq.
2 depends on the occupation numbers of the upper and lower Hubbard bands, and a LS is not generally obtained at all momenta for arbitrary chemical potential and occupation numbers. However, when the occupancies of the upper and lower bands are equal and the chemical potential lies between the bands, a LS with self-energy written in Eq. 1 with a (renormalized) dispersion $\xi(\textbf{p})\rightarrow-\xi(\textbf{p})$ is ensured at all momenta (see Refs. Hatsugai and Kohmoto (1992); Phillips _et al._ (2018, ) for the self-energy in the HK model). This statement is exact and non-perturbative and holds even when the parameter $u$ is taken to infinity. Therefore, close to their respective LSs, the distinction between the Green function of Eq. 2 and that corresponding to model Eq. 1 vanishes. Reparametrization symmetry: We now take a closer look at the self-energy (Eq. 1) appearing in the total Green function $G(\textbf{p},i\epsilon_{n})$. This equation can be rewritten as $\displaystyle\Sigma(\textbf{p},i\epsilon_{n})=-u^{2}G_{0}(\textbf{p},-i\epsilon_{n})$ (3) where $G_{0}(\textbf{p},i\epsilon_{n})$ is the non-interacting Green function. As in the zero dimensional SYK model Kitaev (2015); Maldacena and Stanford (2016), we will be interested in the low-energy scaling limit $i\epsilon_{n}\rightarrow 0$ where one anticipates reparametrization invariance for finite interactions. Equivalently, this corresponds to a Green function contribution on the LS ($\xi(\textbf{p})=0$) where the self-energy diverges and its spatial structure is washed out as $\beta u\rightarrow\infty$. 
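The scaling limit just described can be illustrated numerically. The following sketch (arbitrary illustrative parameters, not taken from any reference) checks that the propagator built from the self-energy of Eq. 1 vanishes linearly, $G\simeq-i\epsilon_{n}/u^{2}$, on the contour $\xi(\textbf{p})=0$ as $i\epsilon_{n}\rightarrow 0$, while remaining finite away from it:

```python
import numpy as np

def G(xi, eps, u):
    """Interacting Green function G = 1/(i*eps - xi - Sigma) with the
    self-energy of Eq. (1), Sigma = u^2/(i*eps + xi)."""
    z = 1j * eps
    return 1.0 / (z - xi - u**2 / (z + xi))

u, eps = 1.0, 1e-4                      # small Matsubara frequency: scaling limit

# On the Luttinger surface (xi = 0) the propagator vanishes and follows
# the scaling form G ~ -i*eps/u^2:
g_ls = G(0.0, eps, u)
assert abs(g_ls - (-1j * eps / u**2)) < 1e-10

# Away from the surface the propagator stays finite, G -> -xi/(xi^2 + u^2):
g_off = G(0.5, eps, u)
assert abs(g_off - (-0.5 / 1.25)) < 1e-3
```

The zero of $G$ at $i\epsilon_{n}\rightarrow 0$ on $\xi(\textbf{p})=0$ is precisely the Luttinger surface of the model, and the linear-in-$\epsilon_{n}$ scaling is the starting point for the reparametrization analysis.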
Therefore, the following equations hold in this limit $\displaystyle G(\textbf{p},i\epsilon_{n})$ $\displaystyle\simeq$ $\displaystyle-\Sigma(\textbf{p},i\epsilon_{n})^{-1}\simeq-\frac{i\epsilon_{n}}{u^{2}}\equiv G(i\epsilon_{n})$ (4) $\displaystyle\Sigma(\textbf{p},i\epsilon_{n})$ $\displaystyle\simeq$ $\displaystyle\frac{u^{2}}{i\epsilon_{n}}\equiv u^{2}G_{0}(i\epsilon_{n})\equiv\Sigma(i\epsilon_{n}).$ (5) Here we have defined the local quantities $G(i\epsilon_{n})$, $G_{0}(i\epsilon_{n})$, and $\Sigma(i\epsilon_{n})$. Transforming to the imaginary time coordinate and invoking time translational invariance, we have the total Green function from Eq. 4 $\displaystyle G(\tau-\tau^{\prime})$ $\displaystyle=$ $\displaystyle\frac{\delta^{\prime}(\tau-\tau^{\prime})}{u^{2}}$ (6) where $\delta^{\prime}(\tau)$ is the temporal derivative of the Dirac delta function. We can also evaluate the imaginary time self-energy as well as the convolution integral of $G(\tau-\tau^{\prime})\Sigma(\tau^{\prime}-\tau^{\prime\prime})$ in the same limit. Using Eqs. 3, 4 and 5, this gives us $\displaystyle\Sigma(\tau-\tau^{\prime})=u^{2}G_{0}(\tau-\tau^{\prime})$ $\displaystyle=$ $\displaystyle\frac{u^{2}}{2}\text{sgn}(\tau-\tau^{\prime})$ (7) $\displaystyle\int d\tau^{\prime}G(\tau-\tau^{\prime})\Sigma(\tau^{\prime}-\tau^{\prime\prime})$ $\displaystyle=$ $\displaystyle-\delta(\tau-\tau^{\prime\prime}).$ (8) Equations 7 and 8 above are written in proximity to the LS and have some notable features.
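Equations 7 and 8 admit a direct numerical check. The sketch below (illustrative values of $\beta$ and $u$) verifies that $G(i\epsilon_{n})\Sigma(i\epsilon_{n})=-1$ at every Matsubara frequency – the frequency-space content of Eq. 8 – and reconstructs the constant magnitude $u^{2}/2$ of $\Sigma(\tau)$ in Eq. 7 from a truncated Matsubara sum (the overall sign of the sum depends on the Fourier-transform convention):

```python
import numpy as np

beta, u = 1.0, 1.0
n = np.arange(200000)
eps = (2 * n + 1) * np.pi / beta        # positive fermionic Matsubara frequencies

# (i) G(i eps) * Sigma(i eps) = -1 at every frequency, so the imaginary-time
# convolution of G and Sigma is -delta(tau - tau''), i.e. Eq. (8):
prod = (-1j * eps / u**2) * (u**2 / (1j * eps))
assert np.allclose(prod, -1.0)

# (ii) Matsubara sum for Sigma(tau) = u^2 G_0(tau): pairing +/- frequencies,
# (1/beta) sum_n e^{i eps_n tau} u^2/(i eps_n)
#   = (2 u^2 / beta) sum_{eps_n > 0} sin(eps_n tau)/eps_n,
# which converges to u^2/2 for 0 < tau < beta, matching |Sigma(tau)| in Eq. (7):
tau = beta / 3
sigma_tau = (2 * u**2 / beta) * np.sum(np.sin(eps * tau) / eps)
assert abs(sigma_tau - u**2 / 2) < 1e-3
```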
First, as in SYK type models, they are invariant under a reparametrization symmetry transformation $\tau\rightarrow f(\tau)$ and $\displaystyle G(\tau-\tau^{\prime})$ $\displaystyle\rightarrow$ $\displaystyle\left[f^{\prime}(\tau^{\prime})f^{\prime}(\tau)\right]G\left(f(\tau)-f(\tau^{\prime})\right)$ (9) $\displaystyle\Sigma(\tau-\tau^{\prime})$ $\displaystyle\rightarrow$ $\displaystyle\Sigma\left(f(\tau)-f(\tau^{\prime})\right).$ (10) Figure 1: Fluctuation propagator in the Cooper channel (zig-zag lines) defined through the Bethe-Salpeter equations. The black solid disk and the shaded triangle denote the bare interaction vertex and vertex corrections arising from electron correlations respectively. The thick black lines denote the total electron Green function. The above scaling behavior of $G(\tau-\tau^{\prime})$ and $\Sigma(\tau-\tau^{\prime})$ in Eqs. 9 and 10 under reparametrization is the same as in the $\Delta^{-1}\equiv q\rightarrow\infty$ limit of the $q$-body SYK model Maldacena and Stanford (2016) provided the roles of $G(\tau-\tau^{\prime})$ and $\Sigma(\tau-\tau^{\prime})$ are swapped (the other choice of $\Delta^{-1}\equiv q=1$ is unphysical). This exchange is an important feature that completes the map to the conformal limit of the infinite-body SYK model. More specifically, it highlights the duality between self-energies in gapped models (of the form appearing in Eq. 1) close to a LS and propagators in CFTs. That is, a theory characterized by a propagator defined by the self-energy of a gapped LS is conformal in the sense of the derivation leading to Eq. 10. It is notable that in the special limit of $q\rightarrow\infty$, the SYK model also has a reparametrization invariant propagator; however, the propagator in Eq. 9 only transforms covariantly. Similarly, the self-energy in Eq. 10 maintains the full symmetry of the reparametrization group.
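The invariance of Eq. 8 under the transformations of Eqs. 9 and 10 can be made explicit (a short check, following the standard SYK manipulation): substituting the transformed quantities and changing integration variables to $s=f(\tau^{\prime})$ gives $\displaystyle\int d\tau^{\prime}\left[f^{\prime}(\tau)f^{\prime}(\tau^{\prime})G\left(f(\tau)-f(\tau^{\prime})\right)\right]\Sigma\left(f(\tau^{\prime})-f(\tau^{\prime\prime})\right)=f^{\prime}(\tau)\int ds~{}G\left(f(\tau)-s\right)\Sigma\left(s-f(\tau^{\prime\prime})\right)=-f^{\prime}(\tau)\delta\left(f(\tau)-f(\tau^{\prime\prime})\right)=-\delta(\tau-\tau^{\prime\prime}),$ where the last equality uses $\delta(f(\tau)-f(\tau^{\prime\prime}))=\delta(\tau-\tau^{\prime\prime})/f^{\prime}(\tau)$ for monotonic $f$, so Eqs. 7 and 8 indeed retain their form.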
This is unlike the SYK model where the self-energy transforms only covariantly under reparametrizations for all $q\geq 2$. In both cases, however, the transformation properties of the propagator and self-energies act to leave Eqs. 7 and 8 reparametrization invariant. This is the key reason why the generic form of fluctuations of the two aforementioned duals cannot be distinguished from one another (as will be shown below). Second, the forms of Eqs. 7 and 8, while seemingly similar to the random matrix model (or the $q=2$ SYK model), are not the same – in the current model with a LS, the self-energy in Eq. 7 is proportional to the linear power of the non-interacting Green function as opposed to the random matrix model where the full Green function appears. This is an important distinction as the solution of the random matrix model does not contain zeros of the Green function and is therefore effectively non-interacting in the presence of long-range disorder. Finally, it can be verified that the arguments leading to Eqs. 7, 8, 9 and 10 also hold for the diagonal elements of a BCS Green function with the interaction parameter $u$ replaced by the superconducting order parameter. In this sense, they belong to the same reparametrization symmetry class near the LS. However, our focus in the following section is on fluctuations in the non-superconducting state; hence, we will not have off-diagonal long range order through anomalous contributions to the pair susceptibility. At this point, we emphasize that while the parameter $u$ begs such an interpretation, it does not play the role of a superconducting gap in our work. It is rather the interaction (“Mott”) gap that exists even above the instability temperature. From Eqs.
7, 8 and 9, 10, and the remarks that follow, we rightly anticipate – further supported by the original derivation for the SYK model Maldacena and Stanford (2016) and later for LS models with a simple pole in the self-energy Setty (2020) – that the leading order free energy contribution from pair fluctuations in the NFL phase close to the QCP takes the form $-\beta F=\beta u^{*}-\gamma\ln(\beta u^{*})$, where $u^{*}$ is the QCP, $\beta$ is inverse temperature, and $\gamma$ is a constant that determines the fluctuation density of states. This is demonstrated below. Ward-Takahashi identity and exact vertex: We now set out to evaluate the fluctuation free energy and density of states from the gauge-invariant fluctuation propagator and pair susceptibility by approaching the QCP from the NFL side. The Bethe-Salpeter equation for the fluctuation propagator is shown in Fig. 1 in terms of the fully interacting pair bubble. An immediate difficulty in evaluating the gauge-invariant pair bubble is that one requires knowledge of the exact fluctuation vertex in models of LSs. This is a priori not straightforward if one chooses to obtain the vertex directly from the Hamiltonian, especially in LS ansatz models such as those espoused in Refs. Yang _et al._ (2006); Konik _et al._ (2006b) where the underlying electronic degrees of freedom are unspecified. However, and regardless of whether the initial Hamiltonian is known, one can resort to the Ward-Takahashi identity Nambu (1960); Schrieffer (2018) to obtain the full vertex provided the exact self-energy is known. This is because once the self-energy is fixed, charge conservation restricts the form of the vertex function. In fact, this approach has been recently advocated in several works describing phenomenological models of the Cuprate pseudo-gap Boyack _et al._ (2016); Boyack (2017); Dai and Lee (2017); He _et al._ (2017); Guo _et al._ (2018). Figure 2: Feynman diagrams defining the vertex corrections through the Ward identity.
The thick solid (dashed) lines are the interacting (non-interacting) Green functions. The shaded triangle denotes vertex corrections arising from electron correlations. The dotted lines denote a generic external momentum transfer into and away from the pair bubble. Since the self-energy is known exactly in our case, we can proceed with the Ward-Takahashi identity. This identity relates the pair-vertex to the interacting Green function. For a Matsubara frequency $iq_{n}$ and momentum transfer q, it takes the form $\displaystyle-iq_{n}\Gamma_{0}(p+q,p)+\textbf{q}\cdot\mathbf{\Gamma}(p+q,p)=G^{-1}(p)-G^{-1}(p+q)$ (11) where the vertex function $\Gamma_{\mu}\equiv\left(\Gamma_{0},\mathbf{\Gamma}\right)$ collects the charge and current components, and we introduce the notation $q\equiv\left(iq_{n},\textbf{q}\right)$. Using the definition $G(p)^{-1}=ip_{n}-\xi(\textbf{p})-\Sigma(p)$, we arrive at an expression for the exact vertex $\displaystyle\Gamma_{\mu}(p+q,p)$ $\displaystyle=$ $\displaystyle\gamma_{\mu}\left(1+\frac{u^{2}}{\left(ip_{n}+iq_{n}+\xi_{\textbf{p}+\textbf{q}}\right)\left(ip_{n}+\xi_{\textbf{p}}\right)}\right)$ (12) $\displaystyle=$ $\displaystyle\gamma_{\mu}\left(1\pm u^{2}G_{0}(-p-q)G_{0}(-p)\right),$ where $\gamma_{\mu}=\left(1,\textbf{p}+\frac{\textbf{q}}{2}\right)$ is the non-interacting vertex and the $\pm$ signs hold for the charge and current vertices, respectively. As we are interested in fluctuations in the non-superconducting state, we have ignored collective mode contributions to the vertex that are proportional to the superconducting order parameter Boyack _et al._ (2016); He _et al._ (2017). We now use the charge vertex in Eq. 12 to obtain the gauge-invariant pair susceptibility (shown diagrammatically in Fig.
2) $\displaystyle\Pi(\textbf{q},iq_{n})$ $\displaystyle=$ $\displaystyle\frac{1}{\beta(2\pi)^{d}}\sum_{\epsilon_{n}}\int d^{d}\textbf{p}~{}G(p+q)G(-p)\Gamma_{0}(-p,p+q)$ (14) $\displaystyle\equiv$ $\displaystyle\Pi_{0}(\textbf{q},iq_{n})+\Pi_{\Gamma}(\textbf{q},iq_{n}).$ Here we have decomposed the total susceptibility into the bare and interaction corrected vertex terms as appearing on the right hand side in Fig. 2. Note that we have defined the pair susceptibility above as well as in Ref. Setty (2020) in a “symmetric” scheme (see for example Chen _et al._ (2005) and references therein) where both the Green functions are interacting. This scheme leads to the $T=0$ ground state wave function described in Phillips _et al._ . It is also interesting to seek solutions in the “asymmetric” Chen _et al._ (2005) scheme (one interacting and the other non-interacting Green function), which will generally yield a different ground state, but we will not address this case here. We now proceed to evaluate the momentum integral by making a similar decomposition $\displaystyle I(q)$ $\displaystyle=$ $\displaystyle\int d^{d}\textbf{p}~{}G(p+q)G(-p)\Gamma_{0}(-p,p+q)$ (15) $\displaystyle\equiv$ $\displaystyle I_{0}(q)+I_{\Gamma}(q)$ (16) with the definitions $I_{0}(q)=\int d^{d}\textbf{p}~{}G(p+q)G(-p)$ and $I_{\Gamma}(q)=u^{2}\int d^{d}\textbf{p}~{}G(p+q)G_{0}(-p-q)G(-p)G_{0}(p)$. Below we solve for the total pair susceptibility in the strong coupling limit where the interaction $u\gg T$. The effects of vertex corrections on the weak coupling result ($u\ll T$) in Ref. Setty (2020) are only quantitative, and henceforth we ignore this case. Figure 3: A schematic of the $u$-$T$ phase diagram. The gray (blue) contours are the strong coupling $\beta u\equiv\kappa\gg 1$ (weak coupling $\beta u\ll 1$) phase boundary and the red line denotes a Fermi liquid. The normal state at strong coupling is a non-Fermi liquid.
$T_{c0}$ is defined as $T_{c}(u=0)$ and $u_{c\infty}$ as $u_{c}(\beta\rightarrow\infty)$ (green dot). Strongly coupled ($\beta u\gg 1$) fluctuations: In order to study the pairing instability, we are primarily interested in deriving the static, long-wavelength limit of the fluctuation propagator. Hence, we take the limiting conditions $iq_{n}\rightarrow 0$ and $r\equiv\frac{p_{f}|\textbf{q}|}{m}\ll u$, where $m$ and $p_{f}$ are the mass and Fermi momentum from the non-interacting electronic dispersion respectively. In this regime, the expression for $\Pi_{0}(\textbf{q},iq_{n})$ to second order in $r$ takes the form Setty (2020) (odd powers in $r$ vanish due to the angular integral) $\displaystyle\Pi_{0}(\textbf{q},iq_{n}\rightarrow 0)$ $\displaystyle\simeq$ $\displaystyle\Pi^{(0)}_{0}(0,0)+\Pi^{(2)}_{0}(\textbf{q},0),$ (17) $\displaystyle\Pi^{(0)}_{0}(0,0)$ $\displaystyle=$ $\displaystyle\frac{m}{4}\left(2S_{1}-u^{2}S_{3}\right)$ $\displaystyle\Pi^{(2)}_{0}(\textbf{q},0)$ $\displaystyle=$ $\displaystyle-\frac{mr^{2}}{32}\left(2S_{3}-u^{2}S_{5}\right),$ where $S_{\nu}=\frac{1}{\beta}\sum_{\epsilon_{n}}(\epsilon_{n}^{2}+u^{2})^{-\nu/2}$ with $\nu$ an odd integer. It should be noted that, for each order in $r$ that is non-vanishing, there is a term proportional to the residue $u^{2}$. Performing the small momentum expansion and using the same procedure described in Ref. Setty (2020), one can similarly obtain the pair susceptibility from the vertex correction term given by $\Pi_{\Gamma}(\textbf{q},iq_{n}\rightarrow 0)\simeq\Pi^{(0)}_{\Gamma}(0,0)+\Pi^{(2)}_{\Gamma}(\textbf{q},0)$ where $\displaystyle\Pi^{(0)}_{\Gamma}(0,0)$ $\displaystyle=$ $\displaystyle\frac{mu^{2}}{4}S_{3},$ (18) $\displaystyle\Pi^{(2)}_{\Gamma}(\textbf{q},0)$ $\displaystyle=$ $\displaystyle-\frac{mu^{2}r^{2}}{32}S_{5}.$ In this case, however, only terms proportional to $u^{2}$ survive and have the opposite sign compared to those appearing in the bare susceptibility in Eqs. 17.
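As a consistency check on the vertex entering these expressions, the exact charge and current vertices of Eq. 12 can be verified against the Ward-Takahashi identity numerically. The sketch below works in one dimension with a quadratic dispersion and $m=1$ (illustrative assumptions, with arbitrary parameter values not taken from the paper):

```python
import numpy as np

u, mu = 1.0, 0.5

def xi(p):
    """Quadratic dispersion with m = 1 (illustrative assumption)."""
    return 0.5 * p**2 - mu

def G_inv(p, ipn):
    """Inverse interacting Green function with the self-energy of Eq. (1)."""
    return ipn - xi(p) - u**2 / (ipn + xi(p))

p, q = 0.73, 0.21                       # one-dimensional momenta
ipn, iqn = 1j * 0.9, 1j * 0.4           # Matsubara frequencies

A = u**2 / ((ipn + iqn + xi(p + q)) * (ipn + xi(p)))
Gamma0 = 1 + A                          # charge vertex, upper sign of Eq. (12)
Gamma1 = (p + q / 2) * (1 - A)          # current vertex, lower sign of Eq. (12)

# Ward-Takahashi identity: -iq_n Gamma_0 + q . Gamma = G^{-1}(p) - G^{-1}(p+q)
lhs = -iqn * Gamma0 + q * Gamma1
rhs = G_inv(p, ipn) - G_inv(p + q, ipn + iqn)
assert abs(lhs - rhs) < 1e-12
```

The identity holds exactly here because $\xi_{\textbf{p}+\textbf{q}}-\xi_{\textbf{p}}=\textbf{q}\cdot(\textbf{p}+\textbf{q}/2)$ for a quadratic band with $m=1$, so the $u^{2}$-dependent pieces of the vertex and of the inverse Green functions cancel term by term.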
Substituting the vertex correction terms back into the full expression for the pair susceptibility $\Pi(\textbf{q},iq_{n})\equiv\Pi_{0}(\textbf{q},iq_{n})+\Pi_{\Gamma}(\textbf{q},iq_{n})$, the terms proportional to $u^{2}$ from the bare susceptibility and vertex corrections cancel. The remaining Matsubara summations $S_{\nu}$ can be performed exactly, and we obtain the strong coupling ($\beta u\gg 1$), static, long-wavelength limit of the inverse fluctuation propagator, denoted $L^{-1}(\textbf{q},iq_{n}\rightarrow 0)$, from Fig. 1 as (see Ref. Setty (2020) for further details) $\displaystyle L^{-1}(\textbf{q},iq_{n}\rightarrow 0)$ $\displaystyle=$ $\displaystyle-g^{-1}+N_{0}\left[\ln\frac{\Lambda}{u}-\frac{\sqrt{2\pi}~{}e^{-\beta u}}{\sqrt{\beta u}}\right]$ (19) $\displaystyle-$ $\displaystyle\frac{N_{0}r^{2}}{8u^{2}}\left(1-\sqrt{2\pi\beta u}~{}e^{-\beta u}\right).$ Here $g$ is the bare superconducting interaction, $N_{0}$ is the density of states of the non-interacting FS and $\Lambda$ is the Matsubara cut-off for the summation when $\nu=1$. We can now derive the QCP separating the superconducting and NFL phases by setting $\beta u\rightarrow\infty$ and seeking a critical $u$ for which the fluctuation propagator diverges. This gives us $u^{*}\equiv u_{c\infty}=\Lambda e^{-1/N_{0}g}$, and the contour of instability for low but non-zero temperatures is shown in the phase diagram in Fig. 3. A notable feature of the strongly coupled superconductor obtained here and in Ref. Setty (2020) is that it results from a gapped LS for a finite interaction $g$. That is, and unlike the BCS case, pairing in the current model is made possible by an attractive interaction $g$ that is above a critical value set by the formula $u=\Lambda e^{-1/N_{0}g}$. Hence, doping does not play a crucial role in this problem. 
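The location of the QCP quoted above can be recovered numerically from Eq. 19. A minimal sketch (illustrative values for $g$, $N_{0}$ and the cutoff $\Lambda$, none taken from the paper) bisects for the divergence of the static, long-wavelength fluctuation propagator at large $\beta$ and reproduces $u_{c\infty}=\Lambda e^{-1/N_{0}g}$:

```python
import numpy as np

g, N0, Lam = 0.5, 1.0, 10.0             # illustrative coupling, DOS and cutoff

def L_inv(u, beta):
    """Static (q = 0) inverse fluctuation propagator of Eq. (19), beta*u >> 1."""
    return (-1.0 / g
            + N0 * (np.log(Lam / u)
                    - np.sqrt(2 * np.pi) * np.exp(-beta * u) / np.sqrt(beta * u)))

def u_crit(beta, lo=1e-3, hi=Lam):
    """Bisect for the zero of L_inv, i.e. the divergence of the propagator."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if L_inv(lo, beta) * L_inv(mid, beta) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

u_star = Lam * np.exp(-1.0 / (N0 * g))  # analytic QCP, u_c(beta -> infinity)
assert abs(u_crit(beta=1e4) - u_star) < 1e-6
```

At finite $\beta$ the exponential correction pushes the root below $u_{c\infty}$, tracing out the strong-coupling contour of instability sketched in Fig. 3.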
This must be contrasted with quantum critical BCS superconductors obtained from pairing gapless conformal fermions She and Zaanen (2009); She _et al._ (2011) or mediated by quantum critical bosons Moon and Chubukov (2010). Fluctuation properties close to the QCP can be further extracted by setting $u=u_{c\infty}$ in the fluctuation propagator in Eq. 19 which yields $\displaystyle L^{-1}\left(\textbf{q}\rightarrow 0,iq_{n}=0\right)|_{u\rightarrow u_{c\infty}}\simeq N_{0}\frac{e^{-\beta u_{c\infty}}}{\sqrt{\beta u_{c\infty}}}.$ (20) It is worthwhile to examine the consequences of Eq. 20 obtained by respecting the Ward identity. First, while gauge invariance leaves the QCP unaffected in comparison with the non-gauge invariant approximation Setty (2020), it modifies the exponent of the $\beta u_{c\infty}$ factor in front of the exponential in Eq. 20 from $\frac{1}{2}$ to $-\frac{1}{2}$. Therefore, the structure of the fluctuation propagator near zero temperature in Eq. 20 takes a form similar to the particle-hole asymmetric conformal propagator discussed in Patel and Sachdev (2019) with the substitution $T^{-1}\rightarrow\tau$. The $\frac{1}{\sqrt{\tau}}$ dependence is a signature of NFL transport and has been used to describe Patel and Sachdev (2019); Sachdev (2015) Planckian behavior Legros _et al._ (2019); Nakajima _et al._ (2019) and universal scattering rates Bruin _et al._ (2013) observed in a variety of strongly correlated materials. Second, following the methods described in Larkin and Varlamov (2005); Varlamov _et al._ (2018), one can evaluate the fluctuation free energy from Eq. 20 to obtain $-\beta F=\beta u_{c\infty}-\gamma\ln(\beta u_{c\infty})$ with $\gamma=-\frac{1}{2}$. Thus, despite a change in sign of the coefficient $\gamma$ when compared to the non-gauge invariant analysis, the form of the free energy remains the same with the inclusion of vertex corrections, as was anticipated in Ref. 
Setty (2020), and takes the form of leading order $[O(N^{0})]$ SYK fluctuation terms Maldacena and Stanford (2016). A further Laplace transform yields a fluctuation density of states proportional to $\frac{1}{\omega}$ at low energy. In the non-gauge invariant calculation, one instead obtains a weaker density of states divergence $\frac{1}{\sqrt{\omega}}$. Finally, vertex corrections render a negative fluctuation correction to the logarithmically divergent term in Eq. 19. Therefore, the curvature of the strong coupling phase diagram is reversed at finite temperatures away from the QCP (see Fig. 3). ## III Discussion Despite the fluctuation free energy acquiring the same form as the corresponding leading order $O(N^{0})$ fluctuation contribution of the SYK model, the constant $\gamma$ determining the fluctuation density of states is different in the two cases. This should not come as a surprise since the leading order fluctuation terms for the SYK model are melon diagrams as opposed to pair bubbles that traditionally appear in the theory of fluctuation superconductivity. As emphasized earlier, the self-energy in LS models is proportional to a linear power of the non-interacting Green function (see Eq. 3). This is an important property of LSs that is crucial for the symmetry mapping to the $q\rightarrow\infty$ SYK model to work. If the self-energy was, instead, proportional to the total Green function (or the $q$=2 SYK), the model would effectively become the non-interacting random matrix model with no LSs. It is interesting that the form of the gauge-invariant fluctuation propagator near the QCP in Eq. 20 has a $\frac{1}{\sqrt{\tau}}$ dependence with the time scale $\tau$ set by $T^{-1}$. Ref. Patel and Sachdev (2019) used such a form of the conformal propagator and showed that resonant processes produce Planckian scattering rates Legros _et al._ (2019); Nakajima _et al._ (2019) with universal coefficients Bruin _et al._ (2013) independent of interactions. 
This work, therefore, motivates an evaluation of fluctuation transport quantities such as paraconductivity in the NFL phase close to the QCP using the Larkin-Varlamov formalism Larkin and Varlamov (2005); Setty (2019). It is also interesting to ask whether universality of the butterfly velocity or “information screening length” Baggioli _et al._ (2018) at the QCP holds in the context of Luttinger surfaces. Finally, in the HK model, the robustness of LSs depends crucially on the ratio of the interaction parameter $u$ (or the Mott gap) and the bandwidth $W$ of the non-interacting bands Phillips _et al._ . If $W>2u$, the LS exists only in certain parts of the Brillouin zone. The same holds true if $2u>W$ and the system is doped so that the chemical potential is located in one of the Hubbard bands. In either of these cases, one still has extensively many maps to the $q\rightarrow\infty$ SYK model – one for each momentum point where the LS exists. However, the QCP is avoided and the form of the free-energy mapping between the models is lost. To conclude, we have shown that there exists a low-energy reparametrization symmetry in models which host LSs where the self-energy has a simple pole. The transformation properties of the Green function and self-energy can be mapped to the $q\rightarrow\infty$ limit of the SYK model. The corresponding mapping of the fluctuation action is robust to the inclusion of interaction vertices through the Ward identity, and the subsequent $\frac{1}{\sqrt{\beta u_{c\infty}}}$ behavior of the fluctuation propagator indicates NFL transport. A LS model of particular interest is the microscopic model by HK Hatsugai and Kohmoto (1992). In addition to the absence of random interactions, a key simplification of the HK Hamiltonian in Eq. 2 is that the interaction terms commute with the kinetic energy, making it exactly solvable Hatsugai and Kohmoto (1992); Phillips _et al._ (2018, ).
Despite this simplicity, the model is sufficient to capture important physical phenomena such as LSs, the Mott gap and doublon-holon “Cooper” pairing. Moreover, the model is not restricted to zero dimensions and can be extended to higher dimensions. Thus the HK model does exactly what is expected of any minimal model – stripping the full interacting problem down to its basic ingredients for describing the most interesting physics. Therefore the problem of pairing instabilities in model LSs, such as those realized in simple microscopic models like HK, lays a firm groundwork toward understanding more sophisticated models exhibiting conformal field theory–gravity duality Zaanen _et al._ (2015) and is worth further exploration. Acknowledgements: We thank P. W. Phillips, G. La Nave and L. Yeo for critical comments. This work is supported by DOE grant number DE-FG02-05ER46236. ## References * Bardeen _et al._ (1957) J. Bardeen, L. N. Cooper, and J. R. Schrieffer, “Theory of superconductivity,” Phys. Rev. 108, 1175–1204 (1957). * Abrikosov _et al._ (1965) Aleksei Alekseevich Abrikosov, Lev Petrovich Gorkov, and Igor Ekhielevich Dzyaloshinskii, _Quantum field theoretical methods in statistical physics_ , Vol. 4 (Pergamon, 1965). * Norman _et al._ (1998) M. R. Norman, M. Randeria, H. Ding, and J. C. Campuzano, “Phenomenology of the low-energy spectral function in high-${T}_{c}$ superconductors,” Phys. Rev. B 57, R11093–R11096 (1998). * Essler and Tsvelik (2002) Fabian HL Essler and Alexei M Tsvelik, “Weakly coupled one-dimensional mott insulators,” Physical Review B 65, 115117 (2002). * Konik _et al._ (2006a) RM Konik, TM Rice, and AM Tsvelik, “Doped spin liquid: Luttinger sum rule and low temperature order,” Physical review letters 96, 086407 (2006a). * Yang _et al._ (2006) Kai-Yu Yang, TM Rice, and Fu-Chun Zhang, “Phenomenological theory of the pseudogap state,” Physical Review B 73, 174501 (2006).
* Stanescu and Kotliar (2006) Tudor D Stanescu and Gabriel Kotliar, “Fermi arcs and hidden zeros of the green function in the pseudogap state,” Physical Review B 74, 125110 (2006). * Hong and Phillips (2012) Seungmin Hong and Philip Phillips, “Towards the standard model for fermi arcs from a wilsonian reduction of the hubbard model,” Physical Review B 86, 115118 (2012). * Scheurer _et al._ (2018) Mathias S Scheurer, Shubhayu Chatterjee, Wei Wu, Michel Ferrero, Antoine Georges, and Subir Sachdev, “Topological order in the pseudogap metal,” Proceedings of the National Academy of Sciences 115, E3665–E3672 (2018). * Taillefer (2010) Louis Taillefer, “Scattering and pairing in cuprate superconductors,” Annu. Rev. Condens. Matter Phys. 1, 51–70 (2010). * Timusk and Statt (1999) Tom Timusk and Bryan Statt, “The pseudogap in high-temperature superconductors: an experimental survey,” Reports on Progress in Physics 62, 61 (1999). * Yoshida _et al._ (2006) T Yoshida, XJ Zhou, K Tanaka, WL Yang, Z Hussain, Z-X Shen, A Fujimori, S Sahrakorpi, M Lindroos, RS Markiewicz, _et al._ , “Systematic doping evolution of the underlying fermi surface of La$_{2-x}$Sr$_{x}$CuO$_{4}$,” Physical Review B 74, 224510 (2006). * Vishik _et al._ (2010) IM Vishik, WS Lee, RH He, M Hashimoto, Z Hussain, TP Devereaux, and ZX Shen, “Arpes studies of cuprate fermiology: superconductivity, pseudogap and quasiparticle dynamics,” New Journal of Physics 12, 105008 (2010). * He _et al._ (2011) Rui-Hua He, XJ Zhou, M Hashimoto, T Yoshida, K Tanaka, SK Mo, T Sasagawa, N Mannella, W Meevasana, Hong Yao, _et al._ , “Doping dependence of the ($\pi$, $\pi$) shadow band in La-based cuprates studied by angle-resolved photoemission spectroscopy,” New Journal of Physics 13, 013031 (2011). * Yang _et al._ (2011) H.-B. Yang, J. D. Rameau, Z.-H. Pan, G. D. Gu, P. D. Johnson, H. Claus, D. G. Hinks, and T. E.
Kidd, “Reconstructed fermi surface of underdoped ${\mathrm{bi}}_{2}{\mathrm{sr}}_{2}{\mathrm{cacu}}_{2}{\mathrm{o}}_{8+\delta}$ cuprate superconductors,” Phys. Rev. Lett. 107, 047003 (2011). * Chakravarty (2010) Sudip Chakravarty, “Key issues in theories of high temperature superconductors,” arXiv preprint arXiv:1006.4180 (2010). * Dzyaloshinskii (1996) Igor Dzyaloshinskii, “Extended van-hove singularity and related non-fermi liquids,” Journal de Physique I 6, 119–135 (1996). * Dzyaloshinskii (2003) Igor Dzyaloshinskii, “Some consequences of the luttinger theorem: The luttinger surfaces in non-fermi liquids and mott insulators,” Physical Review B 68, 085113 (2003). * Konik _et al._ (2006b) RM Konik, TM Rice, and AM Tsvelik, “Doped spin liquid: Luttinger sum rule and low temperature order,” Physical review letters 96, 086407 (2006b). * Altshuler _et al._ (1998) BL Altshuler, AV Chubukov, A Dashevskii, AM Finkel’stein, and DK Morr, “Luttinger theorem for a spin-density-wave state,” EPL (Europhysics Letters) 41, 401 (1998). * Rosch (2007) A Rosch, “Breakdown of luttinger’s theorem in two-orbital mott insulators,” The European Physical Journal B 59, 495–502 (2007). * Dave _et al._ (2013) Kiaran B Dave, Philip W Phillips, and Charles L Kane, “Absence of luttinger’s theorem due to zeros in the single-particle green function,” Physical review letters 110, 090403 (2013). * Berthod _et al._ (2006) Christophe Berthod, Thierry Giamarchi, S Biermann, and Antoine Georges, “Breakup of the fermi surface near the mott transition in low-dimensional systems,” Physical review letters 97, 136401 (2006). * Sakai _et al._ (2009) Shiro Sakai, Yukitoshi Motome, and Masatoshi Imada, “Evolution of electronic structure of doped mott insulators: Reconstruction of poles and zeros of green’s function,” Physical review letters 102, 056404 (2009). * Phillips (2006) Philip Phillips, “Mottness,” Annals of Physics 321, 1634–1650 (2006).
* Valenzuela and Bascones (2007) B Valenzuela and Elena Bascones, “Phenomenological description of the two energy scales in underdoped cuprate superconductors,” Physical review letters 98, 227002 (2007). * Norman _et al._ (2007) MR Norman, A Kanigel, M Randeria, U Chatterjee, and JC Campuzano, “Modeling the fermi arc in underdoped cuprates,” Physical Review B 76, 174501 (2007). * Vanacore _et al._ (2018) Garrett Vanacore, Srinidhi T Ramamurthy, and Philip W Phillips, “Evolution of holographic fermi arcs from a mott insulator,” Journal of High Energy Physics 2018, 9 (2018). * Stanescu _et al._ (2007) Tudor D Stanescu, Philip Phillips, and Ting-Pong Choy, “Theory of the luttinger surface in doped mott insulators,” Physical Review B 75, 104503 (2007). * Setty (2020) Chandan Setty, “Pairing instability on a luttinger surface: A non-fermi liquid to superconductor transition and its sachdev-ye-kitaev dual,” Physical Review B 101, 184506 (2020). * Sachdev and Ye (1993) Subir Sachdev and Jinwu Ye, “Gapless spin-fluid ground state in a random quantum heisenberg magnet,” Physical review letters 70, 3339 (1993). * Kitaev (2015) A Kitaev, http://online.kitp.ucsb.edu/online/entangled15/kitaev/ (2015). * Maldacena and Stanford (2016) Juan Maldacena and Douglas Stanford, “Remarks on the sachdev-ye-kitaev model,” Physical Review D 94, 106002 (2016). * Rosenhaus (2019) Vladimir Rosenhaus, “An introduction to the syk model,” Journal of Physics A: Mathematical and Theoretical 52, 323001 (2019). * (35) Philip W Phillips, Luke Yeo, and Edwin W Huang, “Exact superconducting instability in a doped mott insulator,” arXiv:1912.01008 (To appear Nat. Phys) . * Hatsugai and Kohmoto (1992) Yasuhiro Hatsugai and Mahito Kohmoto, “Exactly solvable model of correlated lattice electrons in any dimensions,” Journal of the Physical Society of Japan 61, 2056–2069 (1992). 
* Nambu (1960) Yoichiro Nambu, “Quasi-particles and gauge invariance in the theory of superconductivity,” Physical Review 117, 648 (1960). * Schrieffer (2018) J Robert Schrieffer, _Theory of superconductivity_ (CRC Press, 2018). * Sachdev (2015) Subir Sachdev, “Bekenstein-hawking entropy and strange metals,” Physical Review X 5, 041025 (2015). * Boyack _et al._ (2016) Rufus Boyack, Brandon M Anderson, Chien-Te Wu, and K Levin, “Gauge-invariant theories of linear response for strongly correlated superconductors,” Physical Review B 94, 094508 (2016). * Boyack (2017) Rufus M Boyack, _Establishing a Consistent Theory of Transport in Strongly Correlated Fermi Superfluids_ (The University of Chicago, 2017). * Dai and Lee (2017) Zhehao Dai and Patrick A Lee, “Optical conductivity from pair density waves,” Physical Review B 95, 014506 (2017). * He _et al._ (2017) Yan He, Yan-Xiao Wang, and Hao Guo, “Establishing gauge invariant linear response of fermionic superfluids with pair fluctuations: A diagrammatic approach,” Physics Letters A 381, 1603–1610 (2017). * Guo _et al._ (2018) Hao Guo, Yan He, and Lianyi He, “Dynamic density structure factor of a unitary fermi gas at finite temperature,” Journal of Physics Communications 2, 045008 (2018). * Baskaran (1991) G Baskaran, “An exactly solvable fermion model: Spinons, holons and a non-fermi liquid phase,” Modern Physics Letters B 5, 643–649 (1991). * Eder _et al._ (2011) R Eder, K Seki, and Y Ohta, “Self-energy and fermi surface of the two-dimensional hubbard model,” Physical Review B 83, 205137 (2011). * Edalati _et al._ (2011a) Mohammad Edalati, Robert G Leigh, and Philip W Phillips, “Dynamically generated mott gap from holography,” Physical review letters 106, 091602 (2011a). * Edalati _et al._ (2011b) Mohammad Edalati, Robert G Leigh, Ka Wai Lo, and Philip W Phillips, “Dynamical gap and cupratelike physics from holography,” Physical Review D 83, 046012 (2011b). 
* Yang (2020) Kun Yang, “An exactly solvable model of fermi arcs and pseudogap,” arXiv preprint arXiv:2011.01680 (2020). * Phillips _et al._ (2018) Philip W Phillips, Chandan Setty, and Shuyi Zhang, “Absence of a charge diffusion pole at finite energies in an exactly solvable interacting flat-band model in d dimensions,” Physical Review B 97, 195102 (2018). * Chen _et al._ (2005) Qijin Chen, Jelena Stajic, Shina Tan, and Kathryn Levin, “Bcs–bec crossover: From high temperature superconductors to ultracold superfluids,” Physics Reports 412, 1–88 (2005). * She and Zaanen (2009) Jian-Huang She and Jan Zaanen, “Bcs superconductivity in quantum critical metals,” Physical Review B 80, 184518 (2009). * She _et al._ (2011) J-H She, Bas J Overbosch, Y-W Sun, Yan Liu, KE Schalm, John A Mydosh, and Jan Zaanen, “Observing the origin of superconductivity in quantum critical metals,” Physical Review B 84, 144527 (2011). * Moon and Chubukov (2010) Eun-Gook Moon and Andrey Chubukov, “Quantum-critical pairing with varying exponents,” Journal of Low Temperature Physics 161, 263–281 (2010). * Patel and Sachdev (2019) Aavishkar A Patel and Subir Sachdev, “Theory of a planckian metal,” Physical review letters 123, 066601 (2019). * Legros _et al._ (2019) A Legros, S Benhabib, W Tabis, F Laliberté, M Dion, M Lizaire, B Vignolle, D Vignolles, H Raffy, ZZ Li, _et al._ , “Universal t-linear resistivity and planckian dissipation in overdoped cuprates,” Nature Physics 15, 142–147 (2019). * Nakajima _et al._ (2019) Yasuyuki Nakajima, Tristin Metz, Christopher Eckberg, Kevin Kirshenbaum, Alex Hughes, Renxiong Wang, Limin Wang, Shanta R Saha, I-Lin Liu, Nicholas P Butch, _et al._ , “Planckian dissipation and scale invariance in a quantum-critical disordered pnictide,” arXiv preprint arXiv:1902.01034 (2019). * Bruin _et al._ (2013) JAN Bruin, H Sakai, RS Perry, and AP Mackenzie, “Similarity of scattering rates in metals showing t-linear resistivity,” Science 339, 804–807 (2013). 
* Larkin and Varlamov (2005) Anatoli Larkin and Andrei Varlamov, _Theory of fluctuations in superconductors_ (Clarendon Press, 2005). * Varlamov _et al._ (2018) AA Varlamov, A Galda, and A Glatz, “Fluctuation spectroscopy: From rayleigh-jeans waves to abrikosov vortex clusters,” Reviews of Modern Physics 90, 015009 (2018). * Setty (2019) Chandan Setty, “Glass-induced enhancement of superconducting t c: Pairing via dissipative mediators,” Physical Review B 99, 144523 (2019). * Baggioli _et al._ (2018) Matteo Baggioli, Bikash Padhi, Philip W Phillips, and Chandan Setty, “Conjecture on the butterfly velocity across a quantum phase transition,” Journal of High Energy Physics 2018, 49 (2018). * Zaanen _et al._ (2015) Jan Zaanen, Yan Liu, Ya-Wen Sun, and Koenraad Schalm, _Holographic duality in condensed matter physics_ (Cambridge University Press, 2015).
# Spatial Parrondo games with spatially dependent game $A$ Sung Chan Choi Department of Mathematics, University of Utah, 155 S. 1400 E., Salt Lake City, UT 84112, USA. e-mail: [email protected] ###### Abstract Parrondo games with spatial dependence were introduced by Toral (2001) and have been studied extensively. In Toral’s model, $N$ players are arranged in a circle. The players play either game $A$ or game $B$. In game $A$, a randomly chosen player wins or loses one unit according to the toss of a fair coin. In game $B$, which depends on parameters $p_{0},p_{1},p_{2}\in[0,1]$, a randomly chosen player, player $x$ say, wins or loses one unit according to the toss of a $p_{m}$-coin, where $m\in\\{0,1,2\\}$ is the number of nearest neighbors of player $x$ who won their most recent game. In this paper, we replace game $A$ by a spatially dependent game, which we call game $A^{\prime}$, introduced by Xie et al. (2011). In game $A^{\prime}$, two nearest neighbors are chosen at random, and one pays one unit to the other based on the toss of a fair coin. Noting that game $A^{\prime}$ is fair, we say that the Parrondo effect occurs if game $B$ is losing or fair and game $C^{\prime}$, determined by a random or periodic sequence of games $A^{\prime}$ and $B$, is winning. We investigate numerically the region in which the Parrondo effect appears. We give sufficient conditions for the mean profit in game $C^{\prime}$ to converge as $N\to\infty$. Finally, we compare the Parrondo region in the model of Xie et al. with that in the model of Toral. ## 1 Introduction The Parrondo effect, in which there is a reversal in direction in some system parameter when two similar dynamics are combined, is the result of an underlying nonlinearity. It was first described by Spanish physicist J. M. R. Parrondo in 1996 in the context of games of chance: He showed that it is possible to combine two fair or losing games, $A$ and $B$, to produce a winning one, $C$. 
Here $C$ is the game obtained by playing games $A$ and $B$ in a random or periodic sequence. His motivation was to provide a discrete (in time and space) version of the so-called flashing Brownian ratchet of Ajdari and Prost [1]. Other versions of Parrondo’s games followed, including Toral’s [2] spatially dependent games. These games were modified by Xie et al. [3], and it is the goal of this paper to explore the latter games in greater depth than was done by Ethier and Lee [4]. ### 1.1 Toral’s spatially dependent games Toral [2] introduced what he called cooperative Parrondo games with spatial dependence. (We prefer the term spatially dependent Parrondo games so as to avoid conflict with the field of cooperative game theory.) The games depend on an integer parameter $N\geq 3$, the number of players, and four probability parameters, $p_{0},p_{1},p_{2},p_{3}$. (This is a slight generalization of the model described in the abstract.) The players are arranged in a circle and labeled from 1 to $N$ (so that players 1 and $N$ are adjacent). At each turn, a player is chosen at random to play. Suppose player $x$ is chosen. In game $A$, he tosses a fair coin. In game $B$, he tosses a $p_{m}$-coin (i.e., a coin whose probability of heads is $p_{m}$), where $m\in\\{0,1,2,3\\}$ depends on the winning or losing status of his two nearest neighbors. A player’s status as winner (1) or loser (0) is decided by the result of his most recent game. Specifically, $m=\begin{cases}0&\text{if $x-1$ and $x+1$ are both losers,}\\\ 1&\text{if $x-1$ is a loser and $x+1$ is a winner,}\\\ 2&\text{if $x-1$ is a winner and $x+1$ is a loser,}\\\ 3&\text{if $x-1$ and $x+1$ are both winners,}\end{cases}$ where $N+1:=1$ and $0:=N$ because of the circular arrangement of players. Player $x$ wins one unit with heads and loses one unit with tails. Replacing $(p_{0},p_{1},p_{2},p_{3})$ by $(p_{0},p_{1},p_{1},p_{2})$ gives the 3-parameter model described in the abstract.
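The dynamics of game $B$ can be sketched in a few lines of code (a minimal Monte Carlo illustration of ours, not taken from the cited papers; the function name and the choice $N=9$ are arbitrary). It encodes the neighbor status as $m=2\eta(x-1)+\eta(x+1)$, as in Section 2.2 below. With all four coins fair, game $B$ reduces to game $A$, so the mean profit per turn should be near 0:

```python
import random

def play_game_B(status, p, rng):
    """One turn of Toral's game B: a randomly chosen player tosses a p_m-coin,
    where m = 2*status(x-1) + status(x+1) encodes his neighbors' status."""
    N = len(status)
    x = rng.randrange(N)
    m = 2 * status[(x - 1) % N] + status[(x + 1) % N]
    win = rng.random() < p[m]
    status[x] = 1 if win else 0   # status records the most recent result
    return 1 if win else -1       # payoff to the chosen player

rng = random.Random(1)
status = [rng.randrange(2) for _ in range(9)]   # N = 9 players
p = (0.5, 0.5, 0.5, 0.5)   # all coins fair, so game B reduces to game A
profits = [play_game_B(status, p, rng) for _ in range(20000)]
print(sum(profits) / len(profits))   # close to 0
```

Running the same loop with Toral's parameters $(1,0.16,0.16,0.7)$ gives a Monte Carlo estimate of the per-turn profit of game $B$ alone.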
These games have been studied in detail in a series of papers by Ethier and Lee [5, 6, 7, 8]. For example, with Toral’s [2] choice of parameters, namely $(p_{0},p_{1},p_{2},p_{3})=(1,0.16,0.16,0.7)$, one can compute the asymptotic profit per turn to the set of $N$ players, for $3\leq N\leq 19$. For $N=5,6$ and $9\leq N\leq 19$, the Parrondo effect (where game $A$ is fair, game $B$ is fair or losing, and the random mixture, game $C:=\frac{1}{2}A+\frac{1}{2}B$, is winning) is present. In the cited papers, a strong law of large numbers and a central limit theorem are obtained. In particular, the asymptotic cumulative profits per turn exist and are the means in the SLLN. Further, it seems clear that these means converge as $N\to\infty$. This has been proved under certain conditions (see Ethier and Lee [7]). ### 1.2 The spatially dependent games of Xie et al. Notice that Toral’s [2] game $A$ is not spatially dependent (i.e., the rules of the game do not depend on the spatial structure of the players). Xie et al. [3] proposed a modification of game $A$ that is spatially dependent as well as being a fair game. To distinguish, we call that game $A^{\prime}$. As before, the games depend on an integer parameter $N\geq 3$, the number of players, and four probability parameters, $p_{0},p_{1},p_{2},p_{3}$. The players are arranged in a circle and labeled from $1$ to $N$ (so that players $1$ and $N$ are adjacent). At each turn, a player is chosen at random to play. Suppose player $x$ is chosen. In game $A^{\prime}$, he chooses one of his two nearest neighbors at random and competes with that neighbor by tossing a fair coin. The result is a transfer of one unit from one of the players to the other, hence the wealth of the set of $N$ players is unchanged. Game $B$ is as before. Player $x$ wins one unit with heads and loses one unit with tails. These games were studied by Xie et al. [3], Li et al. [9], and Ethier and Lee [4].
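Game $A^{\prime}$ only redistributes capital, which is easy to check in simulation. The sketch below is ours; we assume that both participants' win/loss statuses are updated after the coin toss, which is our reading of the rule that a player's status reflects his most recent game:

```python
import random

def play_game_A_prime(capital, status, rng):
    """One turn of game A': player x picks a nearest neighbor at random, and a
    fair coin decides who pays one unit to whom; total wealth is unchanged."""
    N = len(capital)
    x = rng.randrange(N)
    y = (x + rng.choice([-1, 1])) % N            # one of the two nearest neighbors
    winner, loser = (x, y) if rng.random() < 0.5 else (y, x)
    capital[winner] += 1
    capital[loser] -= 1
    status[winner], status[loser] = 1, 0         # both statuses reflect this game

rng = random.Random(7)
N = 9
capital = [0] * N
status = [rng.randrange(2) for _ in range(N)]
for _ in range(10000):
    play_game_A_prime(capital, status, rng)
print(sum(capital))   # always 0: every turn transfers exactly one unit
```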
Only the random mixture case was treated, and convergence of the means has not yet been addressed. Our aim in this paper is to fill in these gaps in the literature. Further, we want to understand this model as well as Toral’s model is understood. We begin by establishing a strong law of large numbers and a central limit theorem, especially in the periodic pattern case. We compute various means numerically and use computer graphics to visualize the Parrondo region. Then we address the issue of convergence of means, which involves certain interacting particle systems. We then establish the convergence, both in the random mixture setting and in the periodic pattern setting, on a large subset of the parameter space. ## 2 SLLN/CLT for the games of Xie et al. In this section, we restate the strong law of large numbers (SLLN) and the central limit theorem (CLT) of Ethier and Lee [10], and we apply them to the Parrondo games of Xie at al. [3]. ### 2.1 SLLN and CLT Ethier and Lee [10] proved an SLLN and a CLT for the Parrondo player’s sequence of profits, motivated by the random mixture $C:=\gamma A+(1-\gamma)B$. A subsequent version in the same paper treats the case of periodic patterns. Consider an irreducible aperiodic Markov chain $\\{X_{n}\\}_{n\geq 0}$ with finite state space $\Sigma_{0}$. It evolves according to the one-step transition matrix ${\bm{P}}=(P_{ij})_{i,j\in\Sigma_{0}}$. Let us denote its unique stationary distribution by the row vector ${\bm{\pi}}=(\pi_{i})_{i\in\Sigma_{0}}$. Let $w:\Sigma_{0}\times\Sigma_{0}\mapsto{\bf R}$ be an arbitrary function, which we write as a matrix ${\bm{W}}=(w(i,j))_{i,j\in\Sigma_{0}}$ and refer to as the payoff matrix. 
Define the sequences $\\{\xi_{n}\\}_{n\geq 1}$ and $\\{S_{n}\\}_{n\geq 1}$ by $\xi_{n}:=w(X_{n-1},X_{n}),\qquad n\geq 1,$ and $S_{n}:=\xi_{1}+\cdots+\xi_{n},\qquad n\geq 1.$ Let ${\bm{\Pi}}$ denote the square matrix each of whose rows is ${\bm{\pi}}$, and let ${\bm{Z}}:=({\bm{I}}-({\bm{P}}-{\bm{\Pi}}))^{-1}$ denote the fundamental matrix. Denote by $\dot{\bm{P}}$ and $\ddot{\bm{P}}$ the Hadamard (entrywise) products $\bm{P}\circ\bm{W}$ and $\bm{P}\circ\bm{W}\circ\bm{W}$ (so $\dot{P}_{ij}:=P_{ij}w(i,j)$ and $\ddot{P}_{ij}:=P_{ij}w(i,j)^{2}$). Let $\bm{1}:=(1,1,\ldots,1)^{\textsf{T}}$ and define $\mu:=\bm{\pi}\dot{\bm{P}}\bm{1}\quad{\rm and}\quad\sigma^{2}:=\bm{\pi}\ddot{\bm{P}}\bm{1}-(\bm{\pi}\dot{\bm{P}}\bm{1})^{2}+2\bm{\pi}\dot{\bm{P}}(\bm{Z}-\bm{\Pi})\dot{\bm{P}}\bm{1}.$ ###### Theorem 1 (Ethier and Lee [10]). Under the above assumptions, and with the distribution of $X_{0}$ arbitrary, $\frac{S_{n}}{n}\to\mu\;\;{\rm a.s.}$ and, if $\sigma^{2}>0$, $\frac{S_{n}-n\mu}{\sqrt{n\sigma^{2}}}\to_{d}N(0,1).$ If $\mu=0$ and $\sigma^{2}>0$, then $-\infty=\liminf_{n\to\infty}S_{n}<\limsup_{n\to\infty}S_{n}=\infty$ _a.s._ ###### Example 2. To illustrate this theorem, let us consider the original capital-dependent Parrondo games (without a bias parameter). These are single-player games. In game $A$, the player tosses a fair coin. In game $B$, the player tosses a 1/10-coin if his capital is divisible by 3 and a 3/4-coin otherwise. In either case, the player wins one unit with heads and loses one unit with tails. The underlying Markov chain corresponding to game $B$ has state space $\Sigma_{0}:=\\{0,1,2\\}$ and one-step transition matrix $\bm{P}_{B}:=\begin{pmatrix}0&1/10&9/10\\\ 1/4&0&3/4\\\ 3/4&1/4&0\end{pmatrix}.$ Its unique stationary distribution is $\bm{\pi}_{B}=(1/13)(5,2,6)$.
The payoff matrix has the form $\bm{W}:=\begin{pmatrix}0&1&-1\\\ -1&0&1\\\ 1&-1&0\end{pmatrix}.$ We find that $\mu_{B}=\bm{\pi}_{B}\dot{\bm{P}}_{B}\bm{1}=0.$ The underlying Markov chain corresponding to game $A$ has the same state space and one-step transition matrix $\bm{P}_{A}:=\begin{pmatrix}0&1/2&1/2\\\ 1/2&0&1/2\\\ 1/2&1/2&0\end{pmatrix}$ with unique stationary distribution $\bm{\pi}_{A}=(1/3)(1,1,1)$. The payoff matrix is the same, and we find that $\mu_{A}=\bm{\pi}_{A}\dot{\bm{P}}_{A}\bm{1}=0,$ a result that is obvious without calculation. Finally, the underlying Markov chain corresponding to game $C:=\frac{1}{2}A+\frac{1}{2}B$ has the same state space and one-step transition matrix $\bm{P}_{C}:=\frac{1}{2}(\bm{P}_{A}+\bm{P}_{B})=\begin{pmatrix}0&3/10&7/10\\\ 3/8&0&5/8\\\ 5/8&3/8&0\end{pmatrix}$ with unique stationary distribution $\bm{\pi}_{C}=(1/709)(245,180,284)$. The payoff matrix is the same, and we find that $\mu_{C}=\bm{\pi}_{C}\dot{\bm{P}}_{C}\bm{1}=\frac{18}{709}\approx 0.0253879.$ This is perhaps the best-known example of Parrondo’s paradox, and the SLLN justifies the conclusion: Two fair games combine to win. We can also derive a CLT, which requires the fundamental matrix $\bm{Z}_{B}:=(\bm{I}-(\bm{P}_{B}-\bm{\Pi}_{B}))^{-1}=\frac{1}{2197}\begin{pmatrix}1725&-38&510\\\ -95&1938&354\\\ 425&118&1654\end{pmatrix}.$ We find that $\sigma^{2}_{B}=\bm{\pi}_{B}\ddot{\bm{P}}_{B}\bm{1}-(\bm{\pi}_{B}\dot{\bm{P}}_{B}\bm{1})^{2}+2\bm{\pi}_{B}\dot{\bm{P}}_{B}(\bm{Z}_{B}-\bm{\Pi}_{B})\dot{\bm{P}}_{B}\bm{1}=\bigg{(}\frac{9}{13}\bigg{)}^{2}\approx 0.479290.$ Similarly, $\bm{Z}_{A}:=(\bm{I}-(\bm{P}_{A}-\bm{\Pi}_{A}))^{-1}=\frac{1}{9}\begin{pmatrix}7&1&1\\\ 1&7&1\\\ 1&1&7\end{pmatrix},$ hence $\sigma^{2}_{A}=\bm{\pi}_{A}\ddot{\bm{P}}_{A}\bm{1}-(\bm{\pi}_{A}\dot{\bm{P}}_{A}\bm{1})^{2}+2\bm{\pi}_{A}\dot{\bm{P}}_{A}(\bm{Z}_{A}-\bm{\Pi}_{A})\dot{\bm{P}}_{A}\bm{1}=1,$ as is obvious without the formula. 
Finally, $\bm{Z}_{C}:=(\bm{I}-(\bm{P}_{C}-\bm{\Pi}_{C}))^{-1}=\frac{1}{502681}\begin{pmatrix}392265&22884&87532\\\ 23585&408580&70516\\\ 80305&39900&382476\end{pmatrix},$ and we conclude that $\sigma^{2}_{C}=\bm{\pi}_{C}\ddot{\bm{P}}_{C}\bm{1}-(\bm{\pi}_{C}\dot{\bm{P}}_{C}\bm{1})^{2}+2\bm{\pi}_{C}\dot{\bm{P}}_{C}(\bm{Z}_{C}-\bm{\Pi}_{C})\dot{\bm{P}}_{C}\bm{1}=\frac{311313105}{356400829}\approx 0.873492.$ In each case we have a CLT. Next we turn to another SLLN and CLT of Ethier and Lee [10], this one motivated by the case of periodic patterns. Let $\bm{P}_{A}$ and $\bm{P}_{B}$ be one-step transition matrices for Markov chains in a finite state space $\Sigma_{0}$. Fix integers $r,s\geq 1$. Assume that $\bm{P}:=\bm{P}_{A}^{r}\bm{P}_{B}^{s}$, as well as all cyclic permutations of $\bm{P}_{A}^{r}\bm{P}_{B}^{s}$, are ergodic, and let the row vector $\bm{\pi}$ be the unique stationary distribution of $\bm{P}$. Let $\bm{\Pi}$ be the square matrix each of whose rows is equal to $\bm{\pi}$, and let ${\bm{Z}}:=({\bm{I}}-({\bm{P}}-{\bm{\Pi}}))^{-1}$ be the fundamental matrix of $\bm{P}$. Given a real-valued function $w$ on $\Sigma_{0}\times\Sigma_{0}$, define the payoff matrix $\bm{W}:=(w(i,j))_{i,j\in\Sigma_{0}}$. Define $\dot{\bm{P}}_{A}:=\bm{P}_{A}\circ\bm{W}$, $\dot{\bm{P}}_{B}:=\bm{P}_{B}\circ\bm{W}$, $\ddot{\bm{P}}_{A}:=\bm{P}_{A}\circ\bm{W}\circ\bm{W}$, $\ddot{\bm{P}}_{B}:=\bm{P}_{B}\circ\bm{W}\circ\bm{W}$, where $\circ$ denotes the Hadamard (entrywise) product. 
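The computations in Example 2 are easy to reproduce numerically. The following sketch (our code, using floating-point rather than exact arithmetic) implements the mean and variance formulas of Theorem 1 and recovers $\mu_{C}=18/709$ and $\sigma_{C}^{2}\approx 0.873492$:

```python
import numpy as np

def stationary(P):
    """Unique stationary distribution of an ergodic transition matrix P."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1); b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

def mean_var(P, W):
    """mu and sigma^2 of Theorem 1 for transition matrix P and payoff matrix W."""
    pi = stationary(P)
    n = P.shape[0]
    Pi = np.tile(pi, (n, 1))                 # square matrix, each row is pi
    Z = np.linalg.inv(np.eye(n) - (P - Pi))  # fundamental matrix
    Pdot, Pddot = P * W, P * W * W           # Hadamard products
    one = np.ones(n)
    mu = pi @ Pdot @ one
    return mu, pi @ Pddot @ one - mu**2 + 2 * pi @ Pdot @ (Z - Pi) @ Pdot @ one

W = np.array([[0, 1, -1], [-1, 0, 1], [1, -1, 0]], dtype=float)
P_A = np.array([[0, 1/2, 1/2], [1/2, 0, 1/2], [1/2, 1/2, 0]])
P_B = np.array([[0, 1/10, 9/10], [1/4, 0, 3/4], [3/4, 1/4, 0]])
P_C = (P_A + P_B) / 2

mu_C, s2_C = mean_var(P_C, W)
print(mu_C)   # 18/709 = 0.0253879...
print(s2_C)   # 311313105/356400829 = 0.873492...
```

The same two calls with $\bm{P}_{A}$ and $\bm{P}_{B}$ return $(\mu_{A},\sigma_{A}^{2})=(0,1)$ and $(\mu_{B},\sigma_{B}^{2})=(0,(9/13)^{2})$.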
Let $\mu_{[r,s]}:=\frac{1}{r+s}\bigg{[}\sum_{u=0}^{r-1}{\bm{\pi}}{\bm{P}}_{A}^{u}\dot{\bm{P}}_{A}\bm{1}+\sum_{v=0}^{s-1}{\bm{\pi}}{\bm{P}}_{A}^{r}{\bm{P}}_{B}^{v}\dot{\bm{P}}_{B}\bm{1}\bigg{]},$ and $\displaystyle\sigma_{[r,s]}^{2}$ $\displaystyle=\frac{1}{r+s}\bigg{[}\sum_{u=0}^{r-1}[\bm{\pi}\bm{P}_{A}^{u}\ddot{\bm{P}}_{A}\bm{1}-(\bm{\pi}\bm{P}_{A}^{u}\dot{\bm{P}}_{A}\bm{1})^{2}]$ $\displaystyle\qquad\quad+\sum_{v=0}^{s-1}[\bm{\pi}\bm{P}_{A}^{r}\bm{P}_{B}^{v}\ddot{\bm{P}}_{B}\bm{1}-(\bm{\pi}\bm{P}_{A}^{r}\bm{P}_{B}^{v}\dot{\bm{P}}_{B}\bm{1})^{2}]$ $\displaystyle\qquad\quad{}+2\sum_{0\leq u<v\leq r-1}\bm{\pi}\bm{P}_{A}^{u}\dot{\bm{P}}_{A}(\bm{P}_{A}^{v-u-1}-\bm{\Pi}\bm{P}_{A}^{v})\dot{\bm{P}}_{A}\bm{1}$ $\displaystyle\qquad\quad{}+2\sum_{u=0}^{r-1}\sum_{v=0}^{s-1}\bm{\pi}\bm{P}_{A}^{u}\dot{\bm{P}}_{A}(\bm{P}_{A}^{r-u-1}-\bm{\Pi}\bm{P}_{A}^{r})\bm{P}_{B}^{v}\dot{\bm{P}}_{B}\bm{1}$ $\displaystyle\qquad\quad{}+2\sum_{0\leq u<v\leq s-1}\bm{\pi}\bm{P}_{A}^{r}\bm{P}_{B}^{u}\dot{\bm{P}}_{B}(\bm{P}_{B}^{v-u-1}-\bm{\Pi}\bm{P}_{A}^{r}\bm{P}_{B}^{v})\dot{\bm{P}}_{B}\bm{1}$ $\displaystyle\qquad\quad{}+2\bigg{(}\sum_{u=0}^{r-1}\sum_{v=0}^{r-1}\bm{\pi}\bm{P}_{A}^{u}\dot{\bm{P}}_{A}\bm{P}_{A}^{r-u-1}\bm{P}_{B}^{s}(\bm{Z}-\bm{\Pi})\bm{P}_{A}^{v}\dot{\bm{P}}_{A}\bm{1}$ $\displaystyle\qquad\qquad\quad{}+\sum_{u=0}^{r-1}\sum_{v=0}^{s-1}\bm{\pi}\bm{P}_{A}^{u}\dot{\bm{P}}_{A}\bm{P}_{A}^{r-u-1}\bm{P}_{B}^{s}(\bm{Z}-\bm{\Pi})\bm{P}_{A}^{r}\bm{P}_{B}^{v}\dot{\bm{P}}_{B}\bm{1}$ $\displaystyle\qquad\qquad\quad{}+\sum_{u=0}^{s-1}\sum_{v=0}^{r-1}\bm{\pi}\bm{P}_{A}^{r}\bm{P}_{B}^{u}\dot{\bm{P}}_{B}\bm{P}_{B}^{s-u-1}(\bm{Z}-\bm{\Pi})\bm{P}_{A}^{v}\dot{\bm{P}}_{A}\bm{1}$ $\displaystyle\qquad\qquad\quad{}+\sum_{u=0}^{s-1}\sum_{v=0}^{s-1}\bm{\pi}\bm{P}_{A}^{r}\bm{P}_{B}^{u}\dot{\bm{P}}_{B}\bm{P}_{B}^{s-u-1}(\bm{Z}-\bm{\Pi})\bm{P}_{A}^{r}\bm{P}_{B}^{v}\dot{\bm{P}}_{B}\bm{1}\bigg{)}\bigg{]},$ where $\bm{1}$ denotes a column vector of $1$s with entries indexed by $\Sigma_{0}$. 
Let $\\{X_{n}\\}_{n\geq 0}$ be a nonhomogeneous Markov chain in $\Sigma_{0}$ with one-step transition matrices $\bm{P}_{A},\ldots,\bm{P}_{A}$ $(r\text{ times})$, $\bm{P}_{B},\ldots,\bm{P}_{B}$ $(s\text{ times})$, $\bm{P}_{A},\ldots,\bm{P}_{A}$ $(r\text{ times})$, $\bm{P}_{B},\ldots,\bm{P}_{B}$ $(s\text{ times})$, and so on. For each $n\geq 1$, define $\xi_{n}:=w(X_{n-1},X_{n})$ and $S_{n}:=\xi_{1}+\cdots+\xi_{n}$. ###### Theorem 3 (Ethier and Lee [10]). Under the above assumptions, and with the distribution of $X_{0}$ arbitrary, $\frac{S_{n}}{n}\to\mu_{[r,s]}\;\;{\rm a.s.}$ and, if $\sigma_{[r,s]}^{2}>0$, then $\frac{S_{n}-n\mu_{[r,s]}}{\sqrt{n\sigma_{[r,s]}^{2}}}\to_{d}N(0,1)\text{ as }n\to\infty.$ ###### Example 4. To illustrate this result, we consider the capital-dependent Parrondo games as above, and we take $r=s=2$. Then $\bm{P}=\bm{P}_{A}^{2}\bm{P}_{B}^{2}=\frac{1}{320}\begin{pmatrix}162&59&99\\\ 151&58&111\\\ 111&47&162\end{pmatrix}.$ Its unique stationary distribution is $\bm{\pi}=(1/6357)(2783,1075,2499)$, and the fundamental matrix is $\bm{Z}=\frac{1}{525348837}\begin{pmatrix}569627023&10027235&-54305421\\\ 22416463&532826915&-29894541\\\ -58953137&-14383645&598685619\end{pmatrix}.$ In this example, $\dot{\bm{P}}_{A}\bm{1}=\bm{0}$, $\ddot{\bm{P}}_{A}=\bm{P}_{A}$, and $\ddot{\bm{P}}_{B}=\bm{P}_{B}$, and this simplifies the mean and variance formulas considerably. 
Specifically, we have $\mu_{[2,2]}=\frac{1}{4}\bm{\pi}\bm{P}_{A}^{2}(\bm{I}+\bm{P}_{B})\dot{\bm{P}}_{B}\bm{1}$ and $\displaystyle\sigma_{[2,2]}^{2}$ $\displaystyle=\frac{1}{4}\big{[}2+2-(\bm{\pi}\bm{P}_{A}^{2}\dot{\bm{P}}_{B}\bm{1})^{2}-(\bm{\pi}\bm{P}_{A}^{2}\bm{P}_{B}\dot{\bm{P}}_{B}\bm{1})^{2}$ $\displaystyle\quad+2\bm{\pi}\dot{\bm{P}}_{A}(\bm{P}_{A}-\bm{\Pi}\bm{P}_{A}^{2})(\bm{I}+\bm{P}_{B})\dot{\bm{P}}_{B}\bm{1}$ $\displaystyle\quad+2\bm{\pi}\bm{P}_{A}\dot{\bm{P}}_{A}(\bm{I}-\bm{\Pi}\bm{P}_{A}^{2})(\bm{I}+\bm{P}_{B})\dot{\bm{P}}_{B}\bm{1}$ $\displaystyle\quad+2\bm{\pi}\bm{P}_{A}^{2}\dot{\bm{P}}_{B}(\bm{I}-\bm{\Pi}\bm{P}_{A}^{2}\bm{P}_{B})\dot{\bm{P}}_{B}\bm{1}$ $\displaystyle\quad+2\bm{\pi}\dot{\bm{P}}_{A}\bm{P}_{A}\bm{P}_{B}^{2}(\bm{Z}-\bm{\Pi})\bm{P}_{A}^{2}(\bm{I}+\bm{P}_{B})\dot{\bm{P}}_{B}\bm{1}$ $\displaystyle\quad+2\bm{\pi}\bm{P}_{A}\dot{\bm{P}}_{A}\bm{P}_{B}^{2}(\bm{Z}-\bm{\Pi})\bm{P}_{A}^{2}(\bm{I}+\bm{P}_{B})\dot{\bm{P}}_{B}\bm{1}$ $\displaystyle\quad+2\bm{\pi}\bm{P}_{A}^{2}\dot{\bm{P}}_{B}\bm{P}_{B}(\bm{Z}-\bm{\Pi})\bm{P}_{A}^{2}(\bm{I}+\bm{P}_{B})\dot{\bm{P}}_{B}\bm{1}$ $\displaystyle\quad+2\bm{\pi}\bm{P}_{A}^{2}\bm{P}_{B}\dot{\bm{P}}_{B}(\bm{Z}-\bm{\Pi})\bm{P}_{A}^{2}(\bm{I}+\bm{P}_{B})\dot{\bm{P}}_{B}\bm{1}\big{]}.$ We conclude that $\mu_{[2,2]}=\frac{4}{163}\approx 0.0245399\quad\text{and}\quad\sigma_{[2,2]}^{2}=\frac{1923037543}{2195688729}\approx 0.875824.$ These numbers are consistent with Ethier and Lee [10]. ### 2.2 Application to game $B$ The Markov chain formalized by Mihailović and Rajković [11] keeps track of the status (loser or winner, 0 or 1) of each of the $N\geq 3$ players of game $B$. Its state space is the product space $\\{\eta=(\eta(1),\eta(2),\ldots,\eta(N)):\eta(x)\in\\{0,1\\}{\rm\ for\ }x=1,\ldots,N\\}=\\{0,1\\}^{N}$ with $2^{N}$ states. Let $m_{x}(\eta):=2\eta(x-1)+\eta(x+1)\in\\{0,1,2,3\\}$. Of course $\eta(0):=\eta(N)$ and $\eta(N+1):=\eta(1)$ because of the circular arrangement of players. 
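The value $\mu_{[2,2]}=4/163$ can also be checked numerically. The sketch below (our code) exploits $\dot{\bm{P}}_{A}\bm{1}=\bm{0}$, which collapses the general mean formula to the simplified form given above:

```python
import numpy as np

def stationary(P):
    """Unique stationary distribution of an ergodic transition matrix P."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1); b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

W = np.array([[0, 1, -1], [-1, 0, 1], [1, -1, 0]], dtype=float)
P_A = np.array([[0, 1/2, 1/2], [1/2, 0, 1/2], [1/2, 1/2, 0]])
P_B = np.array([[0, 1/10, 9/10], [1/4, 0, 3/4], [3/4, 1/4, 0]])
Pdot_B = P_B * W                     # Hadamard product
one, I = np.ones(3), np.eye(3)

P = P_A @ P_A @ P_B @ P_B            # one period of the pattern AABB
pi = stationary(P)
mu_22 = pi @ P_A @ P_A @ (I + P_B) @ Pdot_B @ one / 4
print(mu_22)   # 4/163 = 0.0245398...
```

The variance $\sigma_{[2,2]}^{2}$ can be evaluated analogously by transcribing the simplified formula term by term.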
Also, let $\eta_{x}$ be the element of $\\{0,1\\}^{N}$ equal to $\eta$ except at the $x$th coordinate. For example, $\eta_{1}:=(1-\eta(1),\eta(2),\eta(3),\ldots,\eta(N))$. The one-step transition matrix $\bm{P}_{B}$ for this Markov chain depends not only on $N$ but on four parameters, $p_{0},p_{1},p_{2},p_{3}\in[0,1]$. It has the form $P_{B}(\eta,\eta_{x}):=\begin{cases}N^{-1}p_{m_{x}(\eta)}&\text{if $\eta(x)=0$,}\\\ N^{-1}q_{m_{x}(\eta)}&\text{if $\eta(x)=1$,}\end{cases}\qquad x=1,\ldots,N,\;\eta\in\\{0,1\\}^{N},$ (1) and $P_{B}(\eta,\eta):=N^{-1}\bigg{(}\sum_{x:\eta(x)=0}q_{m_{x}(\eta)}+\sum_{x:\eta(x)=1}p_{m_{x}(\eta)}\bigg{)},\qquad\eta\in\\{0,1\\}^{N},$ (2) where $q_{m}:=1-p_{m}$ for $m=0,1,2,3$ and empty sums are 0. The Markov chain is irreducible and aperiodic if $0<p_{m}<1$ for $m=0,1,2,3$. Under slightly weaker assumptions (see Ethier and Lee [7]), the Markov chain is ergodic, which suffices. For example, if $p_{0}$ is arbitrary and $0<p_{m}<1$ for $m=1,2,3$, or if $0<p_{m}<1$ for $m=0,1,2$ and $p_{3}$ is arbitrary, then ergodicity holds. It appears at first glance that the theorem does not apply in the context of game $B$ because the payoffs are not completely specified by the one-step transitions of the Markov chain. Specifically, a transition from a state $\eta$ to itself results whenever a loser loses or a winner wins, so the transition does not determine the payoff. Our original Markov chain has state space $\\{0,1\\}^{N}$ and its one-step transition matrix $\bm{P}_{B}$ is given by (1) and (2). Assuming it is ergodic, let $\bm{\pi}_{B}$ denote its unique stationary distribution. The approach in Ethier and Lee [5] augments the state space, letting $\Sigma^{*}:=\\{0,1\\}^{N}\times\\{1,2,\ldots,N\\}$ and keeping track not only of the status of each player as described by $\eta\in\\{0,1\\}^{N}$ but also of the label of the next player to play, say $x$. 
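Equations (1) and (2) translate directly into code. A sketch (ours, with illustrative parameter values) that builds $\bm{P}_{B}$ on $\\{0,1\\}^{N}$ and checks that the rows are stochastic:

```python
import numpy as np
from itertools import product

def build_P_B(N, p):
    """One-step transition matrix of game B on {0,1}^N, following Eqs. (1)-(2)."""
    q = [1 - pm for pm in p]
    states = list(product([0, 1], repeat=N))
    idx = {s: i for i, s in enumerate(states)}
    P = np.zeros((2 ** N, 2 ** N))
    for eta in states:
        i = idx[eta]
        for x in range(N):
            m = 2 * eta[(x - 1) % N] + eta[(x + 1) % N]
            f = list(eta); f[x] = 1 - eta[x]
            j = idx[tuple(f)]
            if eta[x] == 0:            # a loser wins with probability p_m ...
                P[i, j] += p[m] / N
                P[i, i] += q[m] / N    # ... or loses and remains a loser
            else:                      # a winner loses with probability q_m ...
                P[i, j] += q[m] / N
                P[i, i] += p[m] / N    # ... or wins and remains a winner
    return P

P = build_P_B(3, (0.4, 0.16, 0.16, 0.7))   # illustrative parameters (ours)
print(P.shape)          # (8, 8)
print(P.sum(axis=1))    # every row sums to 1
```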
The new one-step transition matrix $\bm{P}_{B}^{*}$ can be determined, as can its unique stationary distribution $\bm{\pi}_{B}^{*}$, and the theorem applies. However, there is a drawback to this approach, namely that it is not clear that the variance parameter $(\sigma^{*})^{2}$ is the same as the original one, $\sigma^{2}$. (It is easy to verify that $\mu^{*}=\mu$.) Therefore, we take a different approach, namely the one used by Ethier and Lee [12] in their study of two-dimensional spatial models. Here a different augmentation of $\\{0,1\\}^{N}$ is more effective. We let $\Sigma^{\circ}:=\\{0,1\\}^{N}\times\\{-1,1\\}$ and keep track not only of $\eta\in\\{0,1\\}^{N}$ but also of the profit from the last game played, say $s\in\\{-1,1\\}$. The new one-step transition matrix $\bm{P}_{B}^{\circ}$ has the form, for every $(\eta,s)\in\Sigma^{\circ}$, $P_{B}^{\circ}((\eta,s),(\eta_{x},1)):=\begin{cases}N^{-1}p_{m_{x}(\eta)}&\text{if $\eta(x)=0$,}\\\ 0&\text{if $\eta(x)=1$,}\end{cases}$ $P_{B}^{\circ}((\eta,s),(\eta_{x},-1)):=\begin{cases}0&\text{if $\eta(x)=0$,}\\\ N^{-1}q_{m_{x}(\eta)}&\text{if $\eta(x)=1$,}\end{cases}$ for $x=1,\ldots,N$, and $P_{B}^{\circ}((\eta,s),(\eta,1)):=N^{-1}\sum_{x:\eta(x)=1}p_{m_{x}(\eta)},$ $P_{B}^{\circ}((\eta,s),(\eta,-1)):=N^{-1}\sum_{x:\eta(x)=0}q_{m_{x}(\eta)},$ where $q_{m}:=1-p_{m}$ for $m=0,1,2,3$ and $m_{x}(\eta)=2\eta(x-1)+\eta(x+1)$. There are two inaccessible states, $(\bm{0},1)$ and $(\bm{1},-1)$, but the Markov chain remains ergodic. Let $\bm{\pi}_{B}^{\circ}$ denote the unique stationary distribution, which has entry 0 at each of the two inaccessible states. The payoff function $w^{\circ}$ can now be defined by $w^{\circ}((\eta,s),(\eta_{x},t))=t\text{ if $\eta(x)=(1-t)/2$,}\qquad w^{\circ}((\eta,s),(\eta,t))=t$ for all $(\eta,s)\in\Sigma^{\circ}$, $x=1,2,\ldots,N$, and $t\in\\{-1,1\\}$, and $w^{\circ}=0$ otherwise.
This allows us to define the matrix $\bm{W}^{\circ}$ and then $\dot{\bm{P}}_{B}^{\circ}:={\bm{P}}_{B}^{\circ}\circ\bm{W}^{\circ}$ and $\ddot{\bm{P}}_{B}^{\circ}:={\bm{P}}_{B}^{\circ}\circ\bm{W}^{\circ}\circ\bm{W}^{\circ}$, the Hadamard (or entrywise) products. Theorem 1 yields the following. Let $0<p_{m}<1$ for $m=0,1,2$ or for $m=1,2,3$, so that the Markov chain with one-step transition matrix $\bm{P}_{B}^{\circ}$ is ergodic, and let the row vector $\bm{\pi}_{B}^{\circ}$ be its unique stationary distribution. Define $\mu_{B}^{\circ}=\bm{\pi}_{B}^{\circ}\dot{\bm{P}}_{B}^{\circ}\bm{1},\qquad(\sigma_{B}^{\circ})^{2}=\bm{\pi}_{B}^{\circ}\ddot{\bm{P}}_{B}^{\circ}\bm{1}-(\bm{\pi}_{B}^{\circ}\dot{\bm{P}}_{B}^{\circ}\bm{1})^{2}+2\bm{\pi}_{B}^{\circ}\dot{\bm{P}}_{B}^{\circ}(\bm{Z}_{B}^{\circ}-\bm{1}\bm{\pi}_{B}^{\circ})\dot{\bm{P}}_{B}^{\circ}\bm{1},$ where $\bm{1}$ denotes a column vector of $1$s with entries indexed by $\Sigma^{\circ}$ and $\bm{Z}_{B}^{\circ}:=(\bm{I}-(\bm{P}_{B}^{\circ}-\bm{1}\bm{\pi}_{B}^{\circ}))^{-1}$ is the fundamental matrix. (Notice that $\bm{1}\bm{\pi}_{B}^{\circ}$ is the square matrix each of whose rows is equal to $\bm{\pi}_{B}^{\circ}$.) Let $\\{X_{n}^{\circ}\\}_{n\geq 0}$ be a time-homogeneous Markov chain in $\Sigma^{\circ}$ with one-step transition matrix $\bm{P}_{B}^{\circ}$, and let the initial distribution be arbitrary. For each $n\geq 1$, define $\xi_{n}:=w^{\circ}(X_{n-1}^{\circ},X_{n}^{\circ})$ and $S_{n}:=\xi_{1}+\cdots+\xi_{n}$. ###### Theorem 5. Under the above assumptions, and with the initial distribution arbitrary, $\lim_{n\to\infty}\frac{S_{n}}{n}=\mu_{B}^{\circ}\;\;\emph{a.s.}$ and, if $(\sigma_{B}^{\circ})^{2}>0$, then $\frac{S_{n}-n\mu_{B}^{\circ}}{\sqrt{n(\sigma_{B}^{\circ})^{2}}}\to_{d}N(0,1)\text{ as }n\to\infty.$ We next show that there is a simpler expression for this mean and variance.
Let us define $\mu_{B}:=\bm{\pi}_{B}\dot{\bm{P}}_{B}\bm{1},\qquad\sigma_{B}^{2}:=\bm{\pi}_{B}\ddot{\bm{P}}_{B}\bm{1}-(\bm{\pi}_{B}\dot{\bm{P}}_{B}\bm{1})^{2}+2\bm{\pi}_{B}\dot{\bm{P}}_{B}(\bm{Z}_{B}-\bm{1}\bm{\pi}_{B})\dot{\bm{P}}_{B}\bm{1},\\\ $ where $\bm{1}$ is the column vector of 1s of the appropriate dimension, $\dot{\bm{P}}_{B}$ is $\bm{P}_{B}$ with each $q_{m}$ replaced by $-q_{m}$, and $\ddot{\bm{P}}_{B}=\bm{P}_{B}$. This “rule of thumb” for $\dot{\bm{P}}_{B}$ requires some caution: It must be applied before any simplifications to $\bm{P}_{B}$ are made using $q_{m}=1-p_{m}$. Of course, $\bm{\pi}_{B}$ is the unique stationary distribution, and $\bm{Z}_{B}$ is the fundamental matrix, of $\bm{P}_{B}$. ###### Theorem 6. $\mu_{B}^{\circ}=\mu_{B}$ and $(\sigma_{B}^{\circ})^{2}=\sigma_{B}^{2}.$ ###### Remark. The proof is as in Ethier and Lee [12]. Let us explain its significance. $\mu_{B}^{\circ}$ and $(\sigma_{B}^{\circ})^{2}$ are the mean and variance that appear in the SLLN and the CLT. They are defined in terms of $\bm{P}_{B}^{\circ}$, the augmented one-step transition matrix. $\mu_{B}$ and $\sigma_{B}^{2}$ are defined analogously in terms of $\bm{P}_{B}$, the original one-step transition matrix, using the rule of thumb. ### 2.3 Application to game $C^{\prime}:=\gamma A^{\prime}+(1-\gamma)B$ This case is not much different from the previous one. Notice that, if game $A^{\prime}$ is played, the profit to the set of $N$ players is 0, since game $A^{\prime}$ simply redistributes capital among the players. So we can use the same augmentation of the state space as before, except that 0 is now a possible value of the profit from the last game played. In other words, $\Sigma^{\circ}:=\\{0,1\\}^{N}\times\\{-1,0,1\\}$. The transition probabilities require some new notation. Let $\eta^{x,x\pm 1,\pm 1}$ be the element of $\\{0,1\\}^{N}$ representing the players’ status after player $x$ plays player $x\pm 1$ and wins (1) or loses ($-1$). 
Of course player 0 is player $N$ and player $N+1$ is player 1. E.g., $\eta^{1,2,-1}=(0,1,\eta(3),\ldots,\eta(N))$ (player 1 competes against player 2 and loses, leaving player 1 a loser and player 2 a winner, regardless of their previous status). Then $\displaystyle P_{C^{\prime}}^{\circ}((\eta,s),(\eta_{x},1))$ $\displaystyle=\begin{cases}(1-\gamma)N^{-1}p_{m_{x}(\eta)}&\text{if $\eta(x)=0$,}\\\ 0&\text{if $\eta(x)=1$,}\end{cases}$ (3) $\displaystyle P_{C^{\prime}}^{\circ}((\eta,s),(\eta_{x},-1))$ $\displaystyle=\begin{cases}0&\text{if $\eta(x)=0$,}\\\ (1-\gamma)N^{-1}q_{m_{x}(\eta)}&\text{if $\eta(x)=1$,}\end{cases}$ (4) $\displaystyle P_{C^{\prime}}^{\circ}((\eta,s),(\eta^{x,x-1,-1},0))$ $\displaystyle=\gamma(4N)^{-1},$ (5) $\displaystyle P_{C^{\prime}}^{\circ}((\eta,s),(\eta^{x,x-1,1},0))$ $\displaystyle=\gamma(4N)^{-1},$ (6) $\displaystyle P_{C^{\prime}}^{\circ}((\eta,s),(\eta^{x,x+1,-1},0))$ $\displaystyle=\gamma(4N)^{-1},$ (7) $\displaystyle P_{C^{\prime}}^{\circ}((\eta,s),(\eta^{x,x+1,1},0))$ $\displaystyle=\gamma(4N)^{-1},$ (8) for $x=1,2,\ldots,N$, and $\displaystyle P_{C^{\prime}}^{\circ}((\eta,s),(\eta,1))$ $\displaystyle=(1-\gamma)N^{-1}\sum_{x:\eta(x)=1}p_{m_{x}(\eta)},$ (9) $\displaystyle P_{C^{\prime}}^{\circ}((\eta,s),(\eta,-1))$ $\displaystyle=(1-\gamma)N^{-1}\sum_{x:\eta(x)=0}q_{m_{x}(\eta)}.$ (10) Of course, we could also define $\bm{P}_{C^{\prime}}=\gamma\bm{P}_{A^{\prime}}+(1-\gamma)\bm{P}_{B}$. We notice that Theorems 5 and 6 hold in this framework without change. Let $0<p_{m}<1$ for $m=0,1,2$ or for $m=1,2,3$, so that the Markov chain with one-step transition matrix $\bm{P}_{C^{\prime}}^{\circ}:=\gamma\bm{P}_{A^{\prime}}^{\circ}+(1-\gamma)\bm{P}_{B}^{\circ}$ is ergodic, and let the row vector $\bm{\pi}_{C^{\prime}}^{\circ}$ be its unique stationary distribution. 
Define $\displaystyle\mu_{(\gamma,1-\gamma)^{\prime}}^{\circ}$ $\displaystyle=\bm{\pi}_{C^{\prime}}^{\circ}\dot{\bm{P}}_{C^{\prime}}^{\circ}\bm{1},$ $\displaystyle(\sigma_{(\gamma,1-\gamma)^{\prime}}^{\circ})^{2}$ $\displaystyle=\bm{\pi}_{C^{\prime}}^{\circ}\ddot{\bm{P}}_{C^{\prime}}^{\circ}\bm{1}-(\bm{\pi}_{C^{\prime}}^{\circ}\dot{\bm{P}}_{C^{\prime}}^{\circ}\bm{1})^{2}+2\bm{\pi}_{C^{\prime}}^{\circ}\dot{\bm{P}}_{C^{\prime}}^{\circ}(\bm{Z}_{C^{\prime}}^{\circ}-\bm{1}\bm{\pi}_{C^{\prime}}^{\circ})\dot{\bm{P}}_{C^{\prime}}^{\circ}\bm{1}.$ where $\bm{1}$ denotes a column vector of $1$s with entries indexed by $\Sigma^{\circ}$ and $\bm{Z}_{C^{\prime}}^{\circ}:=(\bm{I}-(\bm{P}_{C^{\prime}}^{\circ}-\bm{1}\bm{\pi}_{C^{\prime}}^{\circ}))^{-1}$ is the fundamental matrix. (Notice that $\bm{1}\bm{\pi}_{C^{\prime}}^{\circ}$ is the square matrix each of whose rows is equal to $\bm{\pi}_{C^{\prime}}^{\circ}$.) Let $\\{X_{n}^{\circ}\\}_{n\geq 0}$ be a time-homogeneous Markov chain in $\Sigma^{\circ}$ with one-step transition matrix $\bm{P}_{C^{\prime}}^{\circ}$. For each $n\geq 1$, define $\xi_{n}:=w^{\circ}(X_{n-1}^{\circ},X_{n}^{\circ})$ and $S_{n}:=\xi_{1}+\cdots+\xi_{n}$. ###### Theorem 7. 
Under the above assumptions, and with the distribution of $X_{0}$ arbitrary, $\lim_{n\to\infty}\frac{S_{n}}{n}=\mu_{(\gamma,1-\gamma)^{\prime}}^{\circ}\;\;\emph{a.s.}$ and, if $(\sigma_{(\gamma,1-\gamma)^{\prime}}^{\circ})^{2}>0$, then $\frac{S_{n}-n\mu_{(\gamma,1-\gamma)^{\prime}}^{\circ}}{\sqrt{n(\sigma_{(\gamma,1-\gamma)^{\prime}}^{\circ})^{2}}}\to_{d}N(0,1)\text{ as }n\to\infty.$ Let us define $\displaystyle\mu_{(\gamma,1-\gamma)^{\prime}}$ $\displaystyle:=\bm{\pi}_{C^{\prime}}\dot{\bm{P}}_{C^{\prime}}\bm{1},$ $\displaystyle\sigma_{(\gamma,1-\gamma)^{\prime}}^{2}$ $\displaystyle:=\bm{\pi}_{C^{\prime}}\ddot{\bm{P}}_{C^{\prime}}\bm{1}-(\bm{\pi}_{C^{\prime}}\dot{\bm{P}}_{C^{\prime}}\bm{1})^{2}+2\bm{\pi}_{C^{\prime}}\dot{\bm{P}}_{C^{\prime}}(\bm{Z}_{C^{\prime}}-\bm{1}\bm{\pi}_{C^{\prime}})\dot{\bm{P}}_{C^{\prime}}\bm{1},$ where $\bm{1}$ is the column vector of 1s of the appropriate dimension, and since $\dot{\bm{P}}_{A^{\prime}}$ can be defined to be $\bm{0}$, $\dot{\bm{P}}_{C^{\prime}}$ is $(1-\gamma)\bm{P}_{B}$ with each $q_{m}$ replaced by $-q_{m}$, and $\ddot{\bm{P}}_{C^{\prime}}=(1-\gamma)\bm{P}_{B}$. This “rule of thumb” for $\dot{\bm{P}}_{C^{\prime}}$ requires some caution: It must be applied before any simplifications to $\bm{P}_{C^{\prime}}$ are made using $q_{m}=1-p_{m}$. Of course, $\bm{\pi}_{C^{\prime}}$ is the unique stationary distribution, and $\bm{Z}_{C^{\prime}}$ is the fundamental matrix, of $\bm{P}_{C^{\prime}}$. Notice that $\dot{\bm{P}}_{A^{\prime}}^{\circ}=\bm{0}$, so $\dot{\bm{P}}_{C^{\prime}}^{\circ}=(1-\gamma)\dot{\bm{P}}_{B}^{\circ}$ and $\ddot{\bm{P}}_{C^{\prime}}^{\circ}=(1-\gamma)\ddot{\bm{P}}_{B}^{\circ}$. ###### Theorem 8. 
$\mu_{(\gamma,1-\gamma)^{\prime}}^{\circ}=\mu_{(\gamma,1-\gamma)^{\prime}}$ (11) and $(\sigma_{(\gamma,1-\gamma)^{\prime}}^{\circ})^{2}=\sigma_{(\gamma,1-\gamma)^{\prime}}^{2}.$ (12) ### 2.4 Application to game $C^{\prime}:=(A^{\prime})^{r}B^{s}$ Next we need versions of the SLLN and the CLT suited to game $C^{\prime}:=(A^{\prime})^{r}B^{s}$. The key result is Theorem 3. For the same reason as before, the theorem does not apply directly to $\bm{P}_{A^{\prime}}$ and $\bm{P}_{B}$. Therefore we again consider the Markov chains in the augmented state space $\Sigma^{\circ}:=\\{0,1\\}^{N}\times\\{-1,0,1\\}$ with one-step transition matrix $\bm{P}_{A^{\prime}}^{\circ}$ and $\bm{P}_{B}^{\circ}$. The definitions are as in (3)–(10) with $\gamma=1$ or $\gamma=0$. With $\bm{W}^{\circ}$ as before, the theorem applies. Fix $r,s\geq 1$. Assume that $\bm{P}^{\circ}:=(\bm{P}_{A^{\prime}}^{\circ})^{r}(\bm{P}_{B}^{\circ})^{s}$, as well as all cyclic permutations of $(\bm{P}_{A^{\prime}}^{\circ})^{r}(\bm{P}_{B}^{\circ})^{s}$, are ergodic, and let the row vector $\bm{\pi}^{\circ}$ be the unique stationary distribution of $\bm{P}^{\circ}$. 
Let $\mu_{[r,s]^{\prime}}^{\circ}:=\frac{1}{r+s}\sum_{v=0}^{s-1}\bm{\pi}^{\circ}(\bm{P}_{A^{\prime}}^{\circ})^{r}(\bm{P}_{B}^{\circ})^{v}\dot{\bm{P}}_{B}^{\circ}\bm{1}$ and $\displaystyle(\sigma_{[r,s]^{\prime}}^{\circ})^{2}$ $\displaystyle=\frac{1}{r+s}\bigg{\\{}s-\sum_{v=0}^{s-1}(\bm{\pi}^{\circ}(\bm{P}_{A^{\prime}}^{\circ})^{r}(\bm{P}_{B}^{\circ})^{v}\dot{\bm{P}}_{B}^{\circ}\bm{1})^{2}$ $\displaystyle\qquad\;\;{}+2\bigg{[}\sum_{0\leq u<v\leq s-1}\bm{\pi}^{\circ}(\bm{P}_{A^{\prime}}^{\circ})^{r}(\bm{P}_{B}^{\circ})^{u}\dot{\bm{P}}_{B}^{\circ}((\bm{P}_{B}^{\circ})^{v-u-1}$ $\displaystyle\qquad\qquad\qquad\qquad\qquad\qquad{}-\bm{1}\bm{\pi}^{\circ}(\bm{P}_{A^{\prime}}^{\circ})^{r}(\bm{P}_{B}^{\circ})^{v})\dot{\bm{P}}_{B}^{\circ}\bm{1}$ $\displaystyle\qquad\;\;{}+\sum_{u=0}^{s-1}\sum_{v=0}^{s-1}\bm{\pi}^{\circ}(\bm{P}_{A^{\prime}}^{\circ})^{r}(\bm{P}_{B}^{\circ})^{u}\dot{\bm{P}}_{B}^{\circ}(\bm{P}_{B}^{\circ})^{s-u-1}(\bm{Z}^{\circ}-\bm{1}\bm{\pi}^{\circ})(\bm{P}_{A^{\prime}}^{\circ})^{r}(\bm{P}_{B}^{\circ})^{v}\dot{\bm{P}}_{B}^{\circ}\bm{1}\bigg{]}\bigg{\\}}.$ Let $\\{X_{n}^{\circ}\\}_{n\geq 0}$ be a nonhomogeneous Markov chain in $\Sigma^{\circ}$ with one-step transition matrices $\bm{P}_{A^{\prime}}^{\circ},\ldots,\bm{P}_{A^{\prime}}^{\circ}$ $(r\text{ times})$, $\bm{P}_{B}^{\circ},\ldots,\bm{P}_{B}^{\circ}$ $(s\text{ times})$, $\bm{P}_{A^{\prime}}^{\circ},\ldots,\bm{P}_{A^{\prime}}^{\circ}$ $(r\text{ times})$, $\bm{P}_{B}^{\circ},\ldots,\bm{P}_{B}^{\circ}$ $(s\text{ times})$, and so on. For each $n\geq 1$, define $\xi_{n}:=w^{\circ}(X_{n-1}^{\circ},X_{n}^{\circ})$ and $S_{n}:=\xi_{1}+\cdots+\xi_{n}$. ###### Theorem 9. 
Under the above assumptions, and with the distribution of $X_{0}$ arbitrary, $\frac{S_{n}}{n}\to\mu_{[r,s]^{\prime}}^{\circ}\;\;\emph{a.s.}$ and, if $(\sigma_{[r,s]^{\prime}}^{\circ})^{2}>0$, then $\frac{S_{n}-n\mu_{[r,s]^{\prime}}^{\circ}}{\sqrt{n(\sigma_{[r,s]^{\prime}}^{\circ})^{2}}}\to_{d}N(0,1)\text{ as }n\to\infty.$ Again there are simpler expressions for this mean and variance. We define $\mu_{[r,s]^{\prime}}$ in terms of $\bm{\pi}$, $\bm{P}_{A^{\prime}}$, $\bm{P}_{B}$, and $\dot{\bm{P}}_{B}$ in the same way that $\mu_{[r,s]^{\prime}}^{\circ}$ was defined in terms of $\bm{\pi}^{\circ}$, $\bm{P}_{A^{\prime}}^{\circ}$, $\bm{P}_{B}^{\circ}$, and $\dot{\bm{P}}_{B}^{\circ}$. ($\dot{\bm{P}}_{B}$ is defined by the rule of thumb.) Finally, $\sigma_{[r,s]^{\prime}}^{2}$ is defined analogously to $(\sigma_{[r,s]^{\prime}}^{\circ})^{2}$. ###### Theorem 10. $\mu_{[r,s]^{\prime}}^{\circ}=\mu_{[r,s]^{\prime}}$ (13) and $(\sigma_{[r,s]^{\prime}}^{\circ})^{2}=\sigma_{[r,s]^{\prime}}^{2}.$ (14) ###### Proof. Eqs. (13) and (14) are proved in the same way as (11) and (12). ∎ ## 3 Numerical computations In this section, we compute various means numerically by using the reduced state space and use computer graphics to visualize the Parrondo region of the Parrondo games of Xie et al. [3]. ### 3.1 State-space reduction Let us begin by explaining what we mean by state-space reduction, which is an important method for simplifying our computations. In general, consider an equivalence relation $\sim$ on a finite set $E$. By definition, $\sim$ is reflexive ($x\sim x$), symmetric ($x\sim y$ implies $y\sim x$), and transitive ($x\sim y$ and $y\sim z$ imply $x\sim z$). It is well known that an equivalence relation partitions the set $E$ into equivalence classes. The set of all equivalence classes, called the quotient set, will be denoted by $\bar{E}$. Let us write $[x]:=\\{y\in E:y\sim x\\}$ for the equivalence class containing $x$. Then $\bar{E}=\\{[x]:x\in E\\}$. 
Now suppose $X_{0},X_{1},X_{2},\ldots$ is a (time-homogeneous) Markov chain in $E$ with transition matrix $\bm{P}$. In particular, $P(x,y)={\rm P}(X_{t+1}=y\mid X_{t}=x)$ for all $x,y\in E$ and $t=0,1,2,\ldots$. Under what conditions on $\bm{P}$ is $[X_{0}],[X_{1}],[X_{2}],\ldots$ a Markov chain in the “reduced” state space $\bar{E}$? A sufficient condition, apparently due to Kemeny and Snell [13, p. 124], is that $\bm{P}$ be lumpable with respect to $\sim$. By definition, this means that, for all $x,x^{\prime},y\in E$, $x\sim x^{\prime}\quad\text{implies}\quad\sum_{y^{\prime}\in[y]}P(x,y^{\prime})=\sum_{y^{\prime}\in[y]}P(x^{\prime},y^{\prime}).$ (15) Moreover, if (15) holds, then the Markov chain $[X_{0}],[X_{1}],[X_{2}],\ldots$ in $\bar{E}$ has transition matrix $\bar{\bm{P}}$ given by $\bar{P}([x],[y]):=\sum_{y^{\prime}\in[y]}P(x,y^{\prime}).$ (16) Notice that (15) ensures that (16) is well defined. For Parrondo games with one-dimensional spatial dependence, the state space, assuming $N\geq 3$ players, is $\\{\eta=(\eta(1),\eta(2),\ldots,\eta(N)):\eta(x)\in\\{0,1\\}{\rm\ for\ }x=1,2,\ldots,N\\}=\\{0,1\\}^{N},$ which has $2^{N}$ states. A state $\eta\in\\{0,1\\}^{N}$ describes the status of each of the $N$ players, 0 for losers and 1 for winners. We can also think of $\\{0,1\\}^{N}$ as the set of $N$-bit binary representations of the integers $0,1,\ldots,2^{N}-1$, thereby giving a natural ordering to the vectors in $\\{0,1\\}^{N}$. Ethier and Lee [5] used the following equivalence relation on $\\{0,1\\}^{N}$: $\eta\sim\zeta$ if and only if $\zeta=\eta_{\sigma}:=(\eta(\sigma(1)),\ldots,\eta(\sigma(N)))$ for a permutation $\sigma$ of $(1,2,\ldots,N)$ belonging to the cyclic group $G$ of order $N$ of the rotations of the players. If, in addition, $p_{1}=p_{2}$, the permutation $\sigma$ can belong to the dihedral group $G$ of order $2N$ of the rotations and reflections of the players. 
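Conditions (15) and (16) translate directly into a mechanical check. A minimal sketch (the function name is ours) that verifies lumpability for a given partition and returns the reduced transition matrix, illustrated on a toy three-state chain in which states 1 and 2 form one equivalence class:

```python
import numpy as np

def lump(P, classes):
    """Verify the Kemeny-Snell lumpability condition (15) for the partition
    `classes` (a list of lists of state indices) and, if it holds, return
    the reduced transition matrix defined by (16)."""
    k = len(classes)
    Pbar = np.zeros((k, k))
    for a, C in enumerate(classes):
        for b, D in enumerate(classes):
            # row sums into class D must agree for all states in class C
            sums = [P[x, D].sum() for x in C]
            if max(sums) - min(sums) > 1e-12:
                raise ValueError("P is not lumpable with respect to this partition")
            Pbar[a, b] = sums[0]
    return Pbar

# toy example: state 0 is a singleton class; states 1 and 2 are equivalent
P = np.array([[0.5, 0.25, 0.25],
              [0.2, 0.40, 0.40],
              [0.2, 0.30, 0.50]])
Pbar = lump(P, [[0], [1, 2]])
```

Here rows 1 and 2 each place mass 0.2 on class $\\{0\\}$ and 0.8 on class $\\{1,2\\}$, so (15) holds and the reduced chain is $2\times 2$.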
They verified the lumpability condition, with the result that the size of the state space was reduced by a factor of nearly $N$ (or $2N$ if $p_{1}=p_{2}$) for large $N$. It should be noted that a sufficient condition for the lumpability condition in this setting is that, for every $\eta,\zeta\in\\{0,1\\}^{N}$, $P(\eta_{\sigma},\zeta_{\sigma})=P(\eta,\zeta)\quad\text{for all $\sigma\in G$}$ or for all $\sigma$ in a subset of $G$ that generates $G$. To fully justify this, the following lemma is useful. ###### Lemma 11 (Ethier and Lee [6]). Fix $N\geq 3$, let $G$ be a subgroup of the symmetric group $S_{N}$. Let $\bm{P}$ be the one-step transition matrix for a Markov chain in $\\{0,1\\}^{N}$ with a unique stationary distribution $\bm{\pi}$. Assume that $P(\eta_{\sigma},\zeta_{\sigma})=P(\eta,\zeta),\qquad\sigma\in G,\;\eta,\zeta\in\\{0,1\\}^{N}.$ (17) Then $\pi(\eta_{\sigma})=\pi(\eta)$ for all $\sigma\in G$ and $\eta\in\\{0,1\\}^{N}$. Let us say that $\eta\in\\{0,1\\}^{N}$ is equivalent to $\zeta\in\\{0,1\\}^{N}$ (written $\eta\sim\zeta$) if there exists $\sigma\in G$ such that $\zeta=\eta_{\sigma}$, and let us denote the equivalence class containing $\eta$ by $[\eta]$. Then, in addition, $\bm{P}$ induces a one-step transition matrix $\bar{\bm{P}}$ for a Markov chain in the quotient set (i.e., the set of equivalence classes) $\bar{\Sigma}$ defined by the formula $\bar{P}([\eta],[\zeta]):=\sum_{\zeta^{\prime}\in[\zeta]}P(\eta,\zeta^{\prime}),$ Furthermore, if $\bar{\bm{P}}$ has a unique stationary distribution $\bar{\bm{\pi}}$, then the unique stationary distribution $\bm{\pi}$ is given by $\pi(\eta)=\bar{\pi}([\eta])/|[\eta]|$, where $|[\eta]|$ denotes the cardinality of the equivalence class $[\eta]$. The lemma will apply to $\bm{P}_{A^{\prime}}$ and $\bm{P}_{B}$ (hence $\bm{P}_{C^{\prime}}$) if we can verify (17) for $G$ being the cyclic group of rotations or, if $p_{1}=p_{2}$, the dihedral group of rotations and reflections. 
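The equivalence classes under the cyclic group of rotations can be enumerated by brute force, and their number can also be obtained from the standard Burnside (necklace) count $N^{-1}\sum_{d\mid N}\varphi(d)\,2^{N/d}$; a short sketch (function names ours) cross-checking the two:

```python
from math import gcd
from itertools import product

def phi(n):
    """Euler's phi-function."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def necklaces(N):
    """Binary necklaces of length N (orbits of {0,1}^N under rotation),
    counted by the phi-function formula (OEIS A000031)."""
    return sum(phi(d) * 2 ** (N // d) for d in range(1, N + 1) if N % d == 0) // N

def orbits(N):
    """Brute-force count of rotation orbits, for cross-checking."""
    seen, count = set(), 0
    for s in product([0, 1], repeat=N):
        if s not in seen:
            seen.update(tuple(s[(i + r) % N] for i in range(N)) for r in range(N))
            count += 1
    return count
```

For example, `necklaces(3) == 4` and `necklaces(4) == 6`, matching the class counts used below.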
The practical effect of this is that we can reduce the size of the state space (namely, $2^{N}$) to what we will call its effective size, which is simply the number of equivalence classes. For example, if $N=3$, there are eight states and four equivalence classes, namely $0=\\{000\\},\quad 1=\\{001,010,100\\},\quad 2=\\{011,101,110\\},\quad 3=\\{111\\}.$ Notice that we label equivalence classes by the number of 1s each element has. If $N=4$, there are 16 states and six equivalence classes, namely $\displaystyle 0$ $\displaystyle=\\{0000\\},$ $\displaystyle 1$ $\displaystyle=\\{0001,0010,0100,1000\\},$ $\displaystyle 2$ $\displaystyle=\\{0011,0110,1001,1100\\},$ $\displaystyle 2^{\prime}$ $\displaystyle=\\{0101,1010\\},$ $\displaystyle 3$ $\displaystyle=\\{0111,1011,1101,1110\\},$ $\displaystyle 4$ $\displaystyle=\\{1111\\}.$ In these two cases, it does not matter which of the two equivalence relations we use; the result is the same. The number of equivalence classes with $G$ being the group of cyclic permutations follows the sequence A000031 in The On-Line Encyclopedia of Integer Sequences (Sloane [14]), described as the number of necklaces with $N$ beads of two colors when turning over is not allowed. There is an explicit formula in terms of Euler’s phi-function. If $p_{1}=p_{2}$, we can reverse the order of the players, and the number of equivalence classes with $G$ being the dihedral group follows the sequence A000029 in the OEIS, described as the number of necklaces with $N$ beads of two colors when turning over is allowed. Again there is an explicit formula. ###### Example 12. To illustrate our approach in a tractable case, we focus on the case $N=4$, as did Xie et al. [3]. Here $\\{0,1\\}^{N}$ has 16 states, ordered as the 4-bit binary representations of the numbers 0–15. First, $\bm{P}_{B}$ has the form as in Eq. (12) of Xie et al. [3]. 
For example, the diagonal entries of $4\bm{P}_{B}$ are $\displaystyle d_{0}$ $\displaystyle:=4q_{0},$ $\displaystyle d_{1}=d_{2}=d_{4}=d_{8}$ $\displaystyle:=p_{0}+q_{0}+q_{1}+q_{2},$ $\displaystyle d_{3}=d_{6}=d_{9}=d_{12}$ $\displaystyle:=p_{1}+p_{2}+q_{1}+q_{2},$ $\displaystyle d_{5}=d_{10}$ $\displaystyle:=2(p_{0}+q_{3}),$ $\displaystyle d_{7}=d_{11}=d_{13}=d_{14}$ $\displaystyle:=p_{1}+p_{2}+p_{3}+q_{3},$ $\displaystyle d_{15}$ $\displaystyle:=4p_{3},$ where $q_{m}:=1-p_{m}$ for $m=0,1,2,3$. For the equivalence relation above, there are six equivalence classes, namely $\\{0000\\}$, $\\{0001,0010,0100,1000\\}$, $\\{0011,0110,1001,1100\\}$, $\\{0101,1010\\}$, $\\{0111,\break 1011,1101,1110\\}$, and $\\{1111\\}$. Denoting the states by their decimal representations (0–15), the equivalence classes are $\\{0\\}$, $\\{1,2,4,8\\}$, $\\{3,6,9,$ $12\\}$, $\\{5,10\\}$, $\\{7,11,13,14\\}$, and $\\{15\\}$. It will be convenient to reorder the states temporarily. Within each equivalence class, we order elements so that each is a fixed rotation of the preceding one, that is, $\\{0000\\}$, $\\{1000,0100,0010,0001\\}$, $\\{1100,0110,0011,\break 1001\\}$, $\\{1010,0101\\}$, $\\{1110,0111,1011,1101\\}$, and $\\{1111\\}$, or $\\{0\\}$, $\\{8,4,2,1\\}$, $\\{12,6,3,9\\}$, $\\{10,5\\}$, $\\{14,7,11,13\\}$, and $\\{15\\}$. 
We now order states in this order: 0, 8, 4, 2, 1, 12, 6, 3, 9, 10, 5, 14, 7, 11, 13, 15, which leads to an alternative form for the transition matrix shown in Figure 1 $\bm{P}_{B}:=\frac{1}{4}\left(\begin{array}[]{c|cccc|cccc|cc|cccc|c}d_{0}&p_{0}&p_{0}&p_{0}&p_{0}&0&0&0&0&0&0&0&0&0&0&0\\\ \vskip 3.0pt plus 1.0pt minus 1.0pt\cr\hline\cr\vskip 3.0pt plus 1.0pt minus 1.0pt\cr q_{0}&d_{8}&0&0&0&p_{2}&0&0&p_{1}&p_{0}&0&0&0&0&0&0\\\ q_{0}&0&d_{4}&0&0&p_{1}&p_{2}&0&0&0&p_{0}&0&0&0&0&0\\\ q_{0}&0&0&d_{2}&0&0&p_{1}&p_{2}&0&p_{0}&0&0&0&0&0&0\\\ q_{0}&0&0&0&d_{1}&0&0&p_{1}&p_{2}&0&p_{0}&0&0&0&0&0\\\ \vskip 3.0pt plus 1.0pt minus 1.0pt\cr\hline\cr\vskip 3.0pt plus 1.0pt minus 1.0pt\cr 0&q_{2}&q_{1}&0&0&d_{12}&0&0&0&0&0&p_{2}&0&0&p_{1}&0\\\ 0&0&q_{2}&q_{1}&0&0&d_{6}&0&0&0&0&p_{1}&p_{2}&0&0&0\\\ 0&0&0&q_{2}&q_{1}&0&0&d_{3}&0&0&0&0&p_{1}&p_{2}&0&0\\\ 0&q_{1}&0&0&q_{2}&0&0&0&d_{9}&0&0&0&0&p_{1}&p_{2}&0\\\ \vskip 3.0pt plus 1.0pt minus 1.0pt\cr\hline\cr\vskip 3.0pt plus 1.0pt minus 1.0pt\cr 0&q_{0}&0&q_{0}&0&0&0&0&0&d_{10}&0&p_{3}&0&p_{3}&0&0\\\ 0&0&q_{0}&0&q_{0}&0&0&0&0&0&d_{5}&0&p_{3}&0&p_{3}&0\\\ \vskip 3.0pt plus 1.0pt minus 1.0pt\cr\hline\cr\vskip 3.0pt plus 1.0pt minus 1.0pt\cr 0&0&0&0&0&q_{2}&q_{1}&0&0&q_{3}&0&d_{14}&0&0&0&p_{3}\\\ 0&0&0&0&0&0&q_{2}&q_{1}&0&0&q_{3}&0&d_{7}&0&0&p_{3}\\\ 0&0&0&0&0&0&0&q_{2}&q_{1}&q_{3}&0&0&0&d_{11}&0&p_{3}\\\ 0&0&0&0&0&q_{1}&0&0&q_{2}&0&q_{3}&0&0&0&d_{13}&p_{3}\\\ \vskip 3.0pt plus 1.0pt minus 1.0pt\cr\hline\cr\vskip 3.0pt plus 1.0pt minus 1.0pt\cr 0&0&0&0&0&0&0&0&0&0&0&q_{3}&q_{3}&q_{3}&q_{3}&d_{15}\end{array}\right).$ Figure 1: The transition matrix $\bm{P}_{B}$ for the Markov chain describing game $B$, with states ordered first by equivalence class and then in the order $\\{0\\},\\{8,4,2,1\\},\\{12,6,3,9\\},\\{10,5\\},\\{14,7,11,13\\},\\{15\\}$. The lumpability condition requires that, within each block, row sums be equal. That this condition is met can be seen at a glance. 
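The at-a-glance check can also be mechanized. The sketch below (all names ours) rebuilds $\bm{P}_{B}$ from the rules of Section 2.2, verifies for $N=4$ that row sums into each rotation class are constant within every class, and, as a further consistency check, reproduces the $N=3$ game-$B$ mean $-0.0909091$ of Table 1 via the rule of thumb, using Toral's parameters $(p_{0},p_{1},p_{2},p_{3})=(1,4/25,4/25,7/10)$:

```python
import numpy as np
from itertools import product

def build_PB(N, p):
    """Game-B transition matrix on {0,1}^N, plus the companion matrix Pdot
    given by the rule of thumb (each q_m replaced by -q_m, applied before
    any simplification)."""
    states = list(product([0, 1], repeat=N))
    idx = {s: i for i, s in enumerate(states)}
    P, Pdot = np.zeros((2**N, 2**N)), np.zeros((2**N, 2**N))
    for eta in states:
        i = idx[eta]
        for x in range(N):
            m = 2 * eta[(x - 1) % N] + eta[(x + 1) % N]
            win, lose = list(eta), list(eta)
            win[x], lose[x] = 1, 0
            P[i, idx[tuple(win)]] += p[m] / N
            P[i, idx[tuple(lose)]] += (1 - p[m]) / N
            Pdot[i, idx[tuple(win)]] += p[m] / N
            Pdot[i, idx[tuple(lose)]] -= (1 - p[m]) / N
    return states, idx, P, Pdot

def stationary(P):
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    return np.linalg.lstsq(A, np.concatenate([np.zeros(n), [1.0]]), rcond=None)[0]

p = [1.0, 4 / 25, 4 / 25, 7 / 10]   # Toral's parameters, as in Table 1

# N = 4: row sums into each rotation class are constant within every class
states, idx, P, _ = build_PB(4, p)
seen, classes = set(), []
for s in states:
    if s not in seen:
        orbit = sorted({tuple(s[(i + r) % 4] for i in range(4)) for r in range(4)})
        seen.update(orbit)
        classes.append(orbit)
max_dev = 0.0
for C in classes:
    for D in classes:
        sums = [sum(P[idx[e], idx[z]] for z in D) for e in C]
        max_dev = max(max_dev, max(sums) - min(sums))

# rule of thumb at N = 3: mu_B = pi_B Pdot_B 1
_, _, P3, Pdot3 = build_PB(3, p)
mu_B = stationary(P3) @ Pdot3 @ np.ones(8)
```

With these parameters the computed $\mu_{B}$ for $N=3$ agrees with the value $-0.0909091$ in the game-$B$ column of Table 1.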
Moreover, we can also see that the sufficient condition (17) holds as well. Because of how we ordered the states, this condition requires that each block be constant along each diagonal parallel to the main diagonal (assuming periodic boundary conditions). We conclude that $\bar{\bm{P}}_{B}=\frac{1}{4}\left(\begin{array}[]{cccccc}4q_{0}&4p_{0}&0&0&0&0\\\ q_{0}&p_{0}+q_{0}+q_{1}+q_{2}&p_{1}+p_{2}&p_{0}&0&0\\\ 0&q_{1}+q_{2}&p_{1}+p_{2}+q_{1}+q_{2}&0&p_{1}+p_{2}&0\\\ 0&2q_{0}&0&2(p_{0}+q_{3})&2p_{3}&0\\\ 0&0&q_{1}+q_{2}&q_{3}&p_{1}+p_{2}+p_{3}+q_{3}&p_{3}\\\ 0&0&0&0&4q_{3}&4p_{3}\end{array}\right).$ We turn next to game $A^{\prime}$. Again there are 16 states (namely, the 4-bit binary representations of the integers 0–15) and the transition matrix can be easily evaluated. To verify the lumpability condition we reorder the states and rewrite the matrix in block form as we did for $\bm{P}_{B}$. See Figure 2. $\bm{P}_{A^{\prime}}:=\frac{1}{8}\left(\begin{array}[]{c|cccc|cccc|cc|cccc|c}0&2&2&2&2&0&0&0&0&0&0&0&0&0&0&0\\\ \vskip 3.0pt plus 1.0pt minus 1.0pt\cr\hline\cr\vskip 3.0pt plus 1.0pt minus 1.0pt\cr 0&2&1&0&1&1&0&0&1&2&0&0&0&0&0&0\\\ 0&1&2&1&0&1&1&0&0&0&2&0&0&0&0&0\\\ 0&0&1&2&1&0&1&1&0&2&0&0&0&0&0&0\\\ 0&1&0&1&2&0&0&1&1&0&2&0&0&0&0&0\\\ \vskip 3.0pt plus 1.0pt minus 1.0pt\cr\hline\cr\vskip 3.0pt plus 1.0pt minus 1.0pt\cr 0&1&1&0&0&2&0&0&0&1&1&1&0&0&1&0\\\ 0&0&1&1&0&0&2&0&0&1&1&1&1&0&0&0\\\ 0&0&0&1&1&0&0&2&0&1&1&0&1&1&0&0\\\ 0&1&0&0&1&0&0&0&2&1&1&0&0&1&1&0\\\ \vskip 3.0pt plus 1.0pt minus 1.0pt\cr\hline\cr\vskip 3.0pt plus 1.0pt minus 1.0pt\cr 0&0&0&0&0&1&1&1&1&4&0&0&0&0&0&0\\\ 0&0&0&0&0&1&1&1&1&0&4&0&0&0&0&0\\\ \vskip 3.0pt plus 1.0pt minus 1.0pt\cr\hline\cr\vskip 3.0pt plus 1.0pt minus 1.0pt\cr 0&0&0&0&0&1&1&0&0&2&0&2&1&0&1&0\\\ 0&0&0&0&0&0&1&1&0&0&2&1&2&1&0&0\\\ 0&0&0&0&0&0&0&1&1&2&0&0&1&2&1&0\\\ 0&0&0&0&0&1&0&0&1&0&2&1&0&1&2&0\\\ \vskip 3.0pt plus 1.0pt minus 1.0pt\cr\hline\cr\vskip 3.0pt plus 1.0pt minus 1.0pt\cr 
0&0&0&0&0&0&0&0&0&0&0&2&2&2&2&0\end{array}\right).$ Figure 2: The transition matrix $\bm{P}_{A^{\prime}}$ for the Markov chain describing game $A^{\prime}$, with states ordered first by equivalence class and then in the order $\\{0\\},\\{8,4,2,1\\},\\{12,6,3,9\\},\\{10,5\\},\\{14,7,11,13\\},\\{15\\}$. Again the condition is clearly met, and we have $\bar{\bm{P}}_{A^{\prime}}=\frac{1}{4}\left(\begin{array}[]{cccccc}0&4&0&0&0&0\\\ 0&2&1&1&0&0\\\ 0&1&1&1&1&0\\\ 0&0&2&2&0&0\\\ 0&0&1&1&2&0\\\ 0&0&0&0&4&0\\\ \end{array}\right).$ The lumpability condition (17) has been checked for $\bm{P}_{B}$ by Ethier and Lee [5]. For $\bm{P}_{A^{\prime}}$, we can verify (17) by observing that, if $(\sigma(1),\ldots,\sigma(N))=(2,3,\ldots,N,1)$, then, after some calculations, $P_{A^{\prime}}(\eta_{\sigma},\zeta_{\sigma})=P_{A^{\prime}}(\eta,\zeta)$. If $(\sigma(1),\break\ldots,\sigma(N))=(N,N-1,\ldots,2,1)$, then the same identity holds. ### 3.2 Means and variances We saw in Theorems 8 and 10 that the means and variances that appear in the SLLNs and CLTs of Sections 2.2–2.4 (namely, $\mu_{B}^{\circ}$, $\mu_{(\gamma,1-\gamma)^{\prime}}^{\circ}$, $\mu_{[r,s]^{\prime}}^{\circ}$, $(\sigma_{B}^{\circ})^{2}$, $(\sigma_{(\gamma,1-\gamma)^{\prime}}^{\circ})^{2}$, and $(\sigma_{[r,s]^{\prime}}^{\circ})^{2}$) are equal to the corresponding quantities defined in terms of the original transition matrices (namely, $\mu_{B}$, $\mu_{(\gamma,1-\gamma)^{\prime}}$, $\mu_{[r,s]^{\prime}}$, $\sigma_{B}^{2}$, $\sigma_{(\gamma,1-\gamma)^{\prime}}^{2}$, and $\sigma_{[r,s]^{\prime}}^{2}$). We claim that the corresponding quantities defined in terms of the reduced transition matrices (namely, $\bar{\mu}_{B}$, $\bar{\mu}_{(\gamma,1-\gamma)^{\prime}}$, $\bar{\mu}_{[r,s]^{\prime}}$, $\bar{\sigma}_{B}^{2}$, $\bar{\sigma}_{(\gamma,1-\gamma)^{\prime}}^{2}$, and $\bar{\sigma}_{[r,s]^{\prime}}^{2}$) are also equal. 
First, we define $\displaystyle\bar{\mu}_{B}$ $\displaystyle:=\bar{\bm{\pi}}_{B}\dot{\bar{\bm{P}}}_{B}\bm{1},$ $\displaystyle\bar{\mu}_{(\gamma,1-\gamma)^{\prime}}$ $\displaystyle:=(1-\gamma)\bar{\bm{\pi}}_{C^{\prime}}\dot{\bar{\bm{P}}}_{B}\bm{1},$ $\displaystyle\bar{\mu}_{[r,s]^{\prime}}$ $\displaystyle:=\frac{1}{r+s}\sum_{v=0}^{s-1}\bar{\bm{\pi}}\bar{\bm{P}}_{A^{\prime}}^{r}\bar{\bm{P}}_{B}^{v}\dot{\bar{\bm{P}}}_{B}\bm{1},$ $\displaystyle\bar{\sigma}_{B}^{2}$ $\displaystyle:=\bar{\bm{\pi}}_{B}\ddot{\bar{\bm{P}}}_{B}\bm{1}-(\bar{\bm{\pi}}_{B}\dot{\bar{\bm{P}}}_{B}\bm{1})^{2}+2\bar{\bm{\pi}}_{B}\dot{\bar{\bm{P}}}_{B}(\bar{\bm{Z}}_{B}-\bm{1}\bar{\bm{\pi}}_{B})\dot{\bar{\bm{P}}}_{B}\bm{1},$ $\displaystyle\bar{\sigma}_{(\gamma,1-\gamma)^{\prime}}^{2}$ $\displaystyle:=\bar{\bm{\pi}}_{C^{\prime}}\ddot{\bar{\bm{P}}}_{C^{\prime}}\bm{1}-(\bar{\bm{\pi}}_{C^{\prime}}\dot{\bar{\bm{P}}}_{C^{\prime}}\bm{1})^{2}+2\bar{\bm{\pi}}_{C^{\prime}}\dot{\bar{\bm{P}}}_{C^{\prime}}(\bar{\bm{Z}}_{C^{\prime}}-\bm{1}\bar{\bm{\pi}}_{C^{\prime}})\dot{\bar{\bm{P}}}_{C^{\prime}}\bm{1},$ $\displaystyle\bar{\sigma}_{[r,s]^{\prime}}^{2}$ $\displaystyle:=\frac{1}{r+s}\bigg{\\{}s-\sum_{v=0}^{s-1}(\bar{\bm{\pi}}\bar{\bm{P}}_{A^{\prime}}^{r}\bar{\bm{P}}_{B}^{v}\dot{\bar{\bm{P}}}_{B}\bm{1})^{2}$ $\displaystyle\qquad\qquad{}+2\bigg{[}\sum_{0\leq u<v\leq s-1}\bar{\bm{\pi}}\bar{\bm{P}}_{A^{\prime}}^{r}\bar{\bm{P}}_{B}^{u}\dot{\bar{\bm{P}}}_{B}(\bar{\bm{P}}_{B}^{v-u-1}-\bm{1}\bar{\bm{\pi}}\bar{\bm{P}}_{A^{\prime}}^{r}\bar{\bm{P}}_{B}^{v})\dot{\bar{\bm{P}}}_{B}\bm{1}$ $\displaystyle\qquad\qquad\qquad{}+\sum_{u=0}^{s-1}\sum_{v=0}^{s-1}\bar{\bm{\pi}}\bar{\bm{P}}_{A^{\prime}}^{r}\bar{\bm{P}}_{B}^{u}\dot{\bar{\bm{P}}}_{B}\bar{\bm{P}}_{B}^{s-u-1}(\bar{\bm{Z}}-\bm{1}\bar{\bm{\pi}})\bar{\bm{P}}_{A^{\prime}}^{r}\bar{\bm{P}}_{B}^{v}\dot{\bar{\bm{P}}}_{B}\bm{1}\bigg{]}\bigg{\\}}.$ ###### Theorem 13. 
$\mu_{B}=\bar{\mu}_{B},\qquad\mu_{(\gamma,1-\gamma)^{\prime}}=\bar{\mu}_{(\gamma,1-\gamma)^{\prime}},\qquad\mu_{[r,s]^{\prime}}=\bar{\mu}_{[r,s]^{\prime}}$ and $\sigma_{B}^{2}=\bar{\sigma}_{B}^{2},\qquad\sigma_{(\gamma,1-\gamma)^{\prime}}^{2}=\bar{\sigma}_{(\gamma,1-\gamma)^{\prime}}^{2},\qquad\sigma_{[r,s]^{\prime}}^{2}=\bar{\sigma}_{[r,s]^{\prime}}^{2}.$ ###### Proof. A result of Ethier and Lee [6] implies that, if $\bm{Q}$ is a $G$-invariant square (not necessarily stochastic) matrix (i.e., $Q(\eta_{\sigma},\zeta_{\sigma})=Q(\eta,\zeta)$ for all $\eta,\zeta\in\\{0,1\\}^{N}$ and all $\sigma\in G$), then $\bm{\pi}\bm{Q}\bm{1}=\bar{\bm{\pi}}\bar{\bm{Q}}\bm{1}.$ Repeated application of this identity gives the desired conclusions. ∎ The formulas for the means with bars are computable for $3\leq N\leq 18$, at least. We give partial results for Toral’s [2] choice of the parameter vector $(p_{0},p_{1},p_{2},p_{3})$ in Table 1. The formulas for the variances with bars are perhaps computable for $3\leq N\leq 12$, but we do not include them here. ### 3.3 Computer graphics Ethier and Lee [4] sketched, for games $A$, $B$, and $C:=\frac{1}{2}A+\frac{1}{2}B$, the Parrondo and anti-Parrondo regions when $3\leq N\leq 9$. They assumed that $p_{1}=p_{2}$ and relabeled $p_{3}$ as $p_{2}$. In other words, their parameter vector was of the form $(p_{0},p_{1},p_{1},p_{2})$. (The reason for this simplification is that a three-dimensional figure is easier to visualize than a four-dimensional figure.) The figures for games $A^{\prime}$, $B$, and $C^{\prime}:=\frac{1}{2}A^{\prime}+\frac{1}{2}B$ are distinctively different from those for games $A$, $B$, and $C$. In both cases, the general shape of the Parrondo and anti-Parrondo regions does not change much, once $N\geq 5$. We illustrate in the case $r=1$ and $s=2$ in Figure 4. Table 1: Mean profit per turn at equilibrium in the games of Toral [2] and Xie et al. [3], assuming $(p_{0},p_{1},p_{2},p_{3})=(1,4/25,4/25,7/10)$. 
Results are given to six significant digits. The entries corresponding to $N=\infty$ are limits as $N\to\infty$ (see Theorem 14).

Mean profit per turn, Toral’s games:

| $N$ | $B$ | $\frac{1}{2}(A+B)$ | $AB$ | $ABB$ | $AAB$ | $AABB$ |
| --- | --- | --- | --- | --- | --- | --- |
| 3 | $-0.0909091$ | $-0.0183774$ | $-0.00695879$ | $-0.0274821$ | $-0.000672486$ | $-0.0148718$ |
| 6 | $-0.0189247$ | $-0.00463310$ | $-0.00497503$ | $-0.00590528$ | $-0.00325099$ | $-0.00498178$ |
| 9 | $-0.00189233$ | $-0.00479036$ | $-0.00493507$ | $-0.00598135$ | $-0.00327802$ | $-0.00493728$ |
| 12 | $-0.000676916$ | $-0.00479089$ | $-0.00490464$ | $-0.00586697$ | $-0.00328800$ | $-0.00490531$ |
| 15 | $-0.000586184$ | $-0.00479089$ | $-0.00488431$ | $-0.00579891$ | $-0.00329249$ | $-0.00488449$ |
| 18 | $-0.000579652$ | $-0.00479089$ | $-0.00486999$ | $-0.00575438$ | $-0.00329483$ | $-0.00487001$ |
| $\infty$ | | $-0.00479089$ | $-0.00479089$ | $-0.00554084$ | $-0.00329853$ | $-0.00479089$ |

Mean profit per turn, Xie et al.’s games:

| $N$ | $B$ | $\frac{1}{2}(A^{\prime}+B)$ | $A^{\prime}B$ | $A^{\prime}BB$ | $A^{\prime}A^{\prime}B$ | $A^{\prime}A^{\prime}BB$ |
| --- | --- | --- | --- | --- | --- | --- |
| 3 | $-0.0909091$ | $-0.0766158$ | $-0.105479$ | $-0.102038$ | $-0.0724638$ | $-0.0773252$ |
| 6 | $-0.0189247$ | $-0.00671656$ | $-0.00640351$ | $-0.00955597$ | $-0.00363075$ | $-0.00745377$ |
| 9 | $-0.00189233$ | $-0.00678314$ | $-0.00676079$ | $-0.00887095$ | $-0.00402382$ | $-0.00705972$ |
| 12 | $-0.000676916$ | $-0.00678336$ | $-0.00682799$ | $-0.00860524$ | $-0.00419181$ | $-0.00695667$ |
| 15 | $-0.000586184$ | $-0.00678336$ | $-0.00684381$ | $-0.00845891$ | $-0.00427852$ | $-0.00691300$ |
| 18 | $-0.000579652$ | $-0.00678336$ | $-0.00684607$ | $-0.00836539$ | $-0.00433011$ | $-0.00688859$ |
| $\infty$ | | $-0.00678336$ | $-0.00678336$ | $-0.00792947$ | $-0.00451510$ | $-0.00678336$ |

## 4 Convergence of means

Computations suggest that $\mu_{(\gamma,1-\gamma)^{\prime}}^{N}$ and $\mu_{[r,s]^{\prime}}^{N}$ converge as $N\to\infty$, regardless of the parameters $p_{0},p_{1},p_{2},p_{3}\in(0,1)$. We cannot prove this, but we can give sufficient conditions on the parameters for this convergence to hold. These are $\displaystyle\max\bigg{[}\bigg{|}\frac{\gamma}{2}+(1-\gamma)(p_{0}-p_{1})\bigg{|},\bigg{|}\frac{\gamma}{2}+(1-\gamma)(p_{2}-p_{3})\bigg{|}\bigg{]}$ $\displaystyle\qquad+\max\bigg{[}\bigg{|}\frac{\gamma}{2}+(1-\gamma)(p_{0}-p_{2})\bigg{|},\bigg{|}\frac{\gamma}{2}+(1-\gamma)(p_{1}-p_{3})\bigg{|}\bigg{]}<1.$ (18)

###### Theorem 14. Fix integers $r,s\geq 1$ and put $\gamma:=r/(r+s)$. If (18) holds, then $\lim_{N\to\infty}\mu_{(\gamma,1-\gamma)^{\prime}}^{N}$ exists, and $\lim_{N\to\infty}\mu_{[r,s]^{\prime}}^{N}=\lim_{N\to\infty}\mu_{(\gamma,1-\gamma)^{\prime}}^{N}$.

The volume of the subset of the parameter space $[0,1]^{4}$ for which (18) holds with $\gamma=1/2$ is, by Mathematica, 5/6. If we assume that $p_{1}=p_{2}$, then the volume of the subset of the parameter space $[0,1]^{3}$ for which (18) holds is, by Mathematica, 3/4. In fact, we plot the three-dimensional volume as a function of $\gamma$ in Figure 3.

Figure 3: Assuming $p_{1}=p_{2}$, the three-dimensional volume of the subset of the parameter space for which (18) holds is plotted as a function of $\gamma$. Notice that the volume is $3/4$ if and only if $\gamma\geq 1/3$.

Figure 4 (panels $N=3$, $N=4$, $N=5$, $N=6$): For $3\leq N\leq 6$, the blue surface is the surface $\mu_{B}=0$, and the red surface is the surface $\mu_{[1,2]^{\prime}}=0$, in the $(p_{0},p_{2},p_{1})$ unit cube. The Parrondo region is the region on or below the blue surface and above the red surface, while the anti-Parrondo region is the region on or above the blue surface and below the red surface. Here $(p_{0},p_{1},p_{1},p_{3})$ is relabeled as $(p_{0},p_{1},p_{1},p_{2})$.

## References

* [1] A. Ajdari and J. 
Prost, Drift induced by a spatially periodic potential of low symmetry: Pulsed dielectrophoresis, C. R. Acad. Sci., Ser. II 315 (1992) 1635–1639. * [2] R. Toral, Cooperative Parrondo games, Fluct. Noise Lett. 1 (2001) L7–L12. * [3] N.-G. Xie, Y. Chen, Y. Ye, G. Xu, L.-G. Wang and C. Wang, Theoretical analysis and numerical simulation of Parrondo’s paradox game in space, Chaos Solitons Fractals 44 (2011) 401–414. * [4] S. N. Ethier and J. Lee, Parrondo games with spatial dependence, III, Fluct. Noise Lett. 14 (2015) 1550039. * [5] S. N. Ethier and J. Lee, Parrondo games with spatial dependence, Fluct. Noise Lett. 11 (2012) 1250004. * [6] S. N. Ethier and J. Lee, Parrondo games with spatial dependence, II, Fluct. Noise Lett. 11 (2012) 1250030. * [7] S. N. Ethier and J. Lee, Parrondo games with spatial dependence and a related spin system, Markov Process. Relat. Fields 19 (2013) 163–194. * [8] S. N. Ethier and J. Lee, Parrondo games with spatial dependence and a related spin system, II, Markov Process. Relat. Fields 19 (2013) 667–692. * [9] Y.-F. Li, S.-Q. Ye, K.-X. Zheng, N.-G. Xie, Y. Ye and L. Wang, A new theoretical analysis approach for a multi-agent spatial Parrondo’s game, Phys. A 407 (2014) 369–379. * [10] S. N. Ethier and J. Lee, Limit theorems for Parrondo’s paradox, Electron. J. Probab. 14 (2009) 1827–1862. * [11] Z. Mihailović and M. Rajković, One dimensional asynchronous cooperative Parrondo’s games, Fluct. Noise Lett. 3 (2003) L389–L398. * [12] S. N. Ethier and J. Lee, Parrondo games with two-dimensional spatial dependence, Fluct. Noise Lett. 16 (2017) (1750005). * [13] J. G. Kemeny and J. L. Snell, Finite Markov Chains, 2nd Ed. (Springer-Verlag, New York, 1976). * [14] N. J. A. Sloane, The On-Line Encyclopedia of Integer Sequences, http://oeis.org/, 2019.
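As a sanity check on the Mathematica value quoted in Section 4, the volume of the region of $[0,1]^{4}$ where condition (18) holds with $\gamma=1/2$ can be estimated by Monte Carlo. A minimal sketch (the sample size and seed are arbitrary choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def condition_holds(p, gamma=0.5):
    """Sufficient condition (18) for convergence of the means,
    vectorized over rows p = (p0, p1, p2, p3)."""
    g2 = gamma / 2
    c = 1 - gamma
    t1 = np.maximum(np.abs(g2 + c * (p[:, 0] - p[:, 1])),
                    np.abs(g2 + c * (p[:, 2] - p[:, 3])))
    t2 = np.maximum(np.abs(g2 + c * (p[:, 0] - p[:, 2])),
                    np.abs(g2 + c * (p[:, 1] - p[:, 3])))
    return t1 + t2 < 1

p = rng.random((400_000, 4))        # uniform samples of (p0, p1, p2, p3)
vol = condition_holds(p).mean()     # Mathematica gives exactly 5/6
```

The estimate should agree with $5/6\approx 0.833$ to within Monte Carlo error.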
# Majorana corner states in square and Kagome quantum spin liquids Haoran Wang Department of Physics and Astronomy, University of Manchester, Manchester M13 9PL, UK Alessandro Principi [email protected] Department of Physics and Astronomy, University of Manchester, Manchester M13 9PL, UK ###### Abstract Quantum spin liquids hosting Majorana excitations have recently experienced renewed interest for potential applications to topological quantum computation. Performing logical operations with reduced poisoning requires localizing such quasiparticles at specific points of the system, with energies that are well defined and inside the bulk energy gap. These are two defining features of second-order topological insulators (SOTIs). Here, we present two spin models that support quantum spin liquid phases characterised by Majorana excitations and behave as SOTIs, one of which is analytically solvable thanks to a theorem by Lieb. We show that, depending on the values of the spin couplings, it is possible to localize either fermions or Majorana particles at their corners. Introduction—Quantum spin liquids (QSLs) are intriguing states of matter in which spins never freeze due to their high degree of entanglement Anderson (1973); Zhou _et al._ (2017); Savary and Balents (2016); Wen (2004). Recently, they have experienced rekindled interest because of two main factors. On the one hand, exactly solvable spin-lattice models have been developed, for example Kitaev’s Kitaev (2006), which host QSL phases whose excitations are neither fermions nor bosons but Majorana particles. On the other hand, new materials compatible with such models have been discovered Banerjee _et al._ (2017). Majorana particles are one of the cornerstones of research in topological quantum computation, since they can be used to design quantum gates that are resilient to external noise Kitaev (2003); Sarma _et al._ (2015); Hoffman _et al._ (2016); Lian _et al._ (2018).
In order to use them to perform logical operations, it is however crucial to find ways to localize them at specific points of the system, with well-defined energies inside an energy gap. The former property is missing in the original Kitaev QSL model. There, 1D Majorana channels can be formed at the edges, in the presence of a magnetic field Kitaev (2006). (In principle, it is also possible to localize Majorana particles at vortices of the $\mathbb{Z}_{2}$ gauge field Kitaev (2006); Knolle (2016). However, generating and controlling such nanoscale objects in experiments would be extremely challenging.) In this paper we show two spin models, one of which is analytically solvable, that support topologically protected Majorana corner states with energies in the middle of the bulk band gap. Mid-gap corner states emerge in two-dimensional (2D) “second-order” topological insulators (SOTIs) Benalcazar _et al._ (2017a, b); Schindler _et al._ (2018a); Langbehn _et al._ (2017); Geier _et al._ (2018); Ezawa (2018a); Song _et al._ (2017); Ezawa (2018b, c). $d$-dimensional SOTIs are insulating both in the bulk and at the surfaces, but exhibit gapless $(d-2)$-dimensional states protected by a variety of crystalline symmetries Benalcazar _et al._ (2017b) (among others, mirror reflection, twofold rotation, or inversion symmetry). SOTIs have been realized in various experiments, most notably in artificial settings such as mechanical Serra-Garcia _et al._ (2018), electrical Imhof _et al._ (2018), microwave Peterson _et al._ (2018), and photonic El Hassan _et al._ (2019) devices, but they have also been shown to occur in Nature Schindler _et al._ (2018b). A similar phenomenology has been predicted to occur in non-Hermitian systems Luo and Zhang (2019), topological superconductors Liu _et al._ (2018); Zhu (2018); Laubscher _et al._ (2019); Wang _et al._ (2018); Kheirkhah _et al._ (2020); Yan (2019) and QSLs Dwivedi _et al._ (2018). In the latter two, corner states can be Majorana particles.
However, to the best of our knowledge, no analytically solvable QSL has been shown to exhibit Majorana corner states. (The model of Ref. Dwivedi _et al._ (2018) can be solved exactly by mapping spins into Majorana particles which are free to propagate on top of a quenched $\mathbb{Z}_{2}$ gauge potential; however, the ground-state flux configuration is not analytically known and must be found with numerical techniques.) Here we study two frustrated spin-$3/2$ systems which exhibit QSL phases and corner states at low temperature. The models we study are similar to that of Ref. Dwivedi _et al._ (2018) but are defined on Kagome Chua _et al._ (2011) and square Yao _et al._ (2009) lattices, rather than on the Shastry-Sutherland one. Following Kitaev’s construction Kitaev (2006), we fractionalize the spin degrees of freedom into two itinerant Majorana particles and a quenched $\mathbb{Z}_{2}$ gauge potential. Notably, the ground state of the square lattice we study is analytically known Lieb (1994). We show that both models can support stable fermionic or Majorana corner states, depending on the values of the spin couplings. The analytical solvability of the square-lattice model lends credibility to the possibility of finding Majorana corner states in QSLs. The model—We consider two lattices, square and Kagome, at whose sites are located spin-$3/2$ variables. The Hamiltonian of such systems can be written in terms of $4\times 4$ matrices which operate on the four spin polarizations at each site.
The basis of $4\times 4$ matrices can be chosen to consist of the identity, the five Gamma matrices ${\hat{\Gamma}}^{a}$ ($a=1,\ldots,5$), which can be represented as symmetric bilinear combinations of the spin-$3/2$ operators ${\hat{S}}^{\alpha}$ ($\alpha=x,y,z$) as Chua _et al._ (2011); Yao _et al._ (2009) $\displaystyle{\hat{\Gamma}}^{1}=\frac{\\{{\hat{S}}^{y},{\hat{S}}^{z}\\}}{\sqrt{3}},~{}{\hat{\Gamma}}^{2}=\frac{\\{{\hat{S}}^{z},{\hat{S}}^{x}\\}}{\sqrt{3}},~{}{\hat{\Gamma}}^{3}=\frac{\\{{\hat{S}}^{x},{\hat{S}}^{y}\\}}{\sqrt{3}},$ $\displaystyle{\hat{\Gamma}}^{4}=\frac{1}{\sqrt{3}}\big{[}({\hat{S}}^{x})^{2}-({\hat{S}}^{y})^{2}\big{]},~{}{\hat{\Gamma}}^{5}=({\hat{S}}^{z})^{2}-\frac{5}{4},$ (1) and the ten bilinears ${\hat{\Gamma}}^{ab}=[{\hat{\Gamma}}^{a},{\hat{\Gamma}}^{b}]/(2i)$. Here, $[\cdot,\cdot]$ and $\\{\cdot,\cdot\\}$ are the matrix commutator and anticommutator, respectively. In passing we note that the chosen Gamma matrices satisfy the Clifford algebra $\\{{\hat{\Gamma}}^{a},{\hat{\Gamma}}^{b}\\}=2\delta^{ab}$. We now define the Gamma-matrix model Chua _et al._ (2011); Yao _et al._ (2009) $\displaystyle{\hat{\cal H}}$ $\displaystyle=$ $\displaystyle J_{1}\sum_{\langle i,j\rangle\in{\cal P}_{1}}{\hat{\Gamma}}_{i}^{1}{\hat{\Gamma}}_{j}^{2}+J_{2}\sum_{\langle i,j\rangle\in{\cal P}_{2}}{\hat{\Gamma}}_{i}^{3}{\hat{\Gamma}}_{j}^{4}$ $\displaystyle+$ $\displaystyle J_{1}^{\prime}\sum_{\langle i,j\rangle\in{\cal P}_{1}}{\hat{\Gamma}}_{i}^{15}{\hat{\Gamma}}_{j}^{25}+J_{2}^{\prime}\sum_{\langle i,j\rangle\in{\cal P}_{2}}{\hat{\Gamma}}_{i}^{35}{\hat{\Gamma}}_{j}^{45}+J_{5}\sum_{i}{\hat{\Gamma}}_{i}^{5},$ (2) where $i$ and $j$ label the lattice sites, while ${\cal P}_{1}$ and ${\cal P}_{2}$ are the collections of plaquettes of types 1 and 2, respectively (see Fig. 1 for the definition). In Eq.
(2), the coupling between spins depends on the type of plaquette to which the link $\langle i,j\rangle$ belongs (links are taken in the counterclockwise direction in plaquettes 1 and 2). For the Kagome lattice, plaquettes 1 and 2 coincide with upward and downward triangles, respectively. Conversely, the plaquettes of the square lattice are taken to alternate between type 1 and 2 along every other diagonal, while all others are of type 3. In the Kagome lattice, hexagons are type-3 plaquettes. Type-3 plaquettes do not appear explicitly in the Hamiltonian (2) to avoid double counting the bonds. Figure 1: A pictorial representation of the two models studied in this paper. Panel (a) The square model. Panel (b) The Kagome model. In both cases we identify three types of plaquettes and two types of bonds. Solid (dashed) arrows correspond to couplings of the type ${\hat{\Gamma}}_{i}^{1}{\hat{\Gamma}}_{j}^{2}$ and ${\hat{\Gamma}}_{i}^{15}{\hat{\Gamma}}_{j}^{25}$ (${\hat{\Gamma}}_{i}^{3}{\hat{\Gamma}}_{j}^{4}$ and ${\hat{\Gamma}}_{i}^{35}{\hat{\Gamma}}_{j}^{45}$). The direction of the arrow is from site $i$ to $j$. The shaded region represents the unit cell used in square-lattice calculations (atoms in it are labeled with $1,2,3$ and $4$). Panels (c)-(f) The four distinct flux patterns in the Kagome unit cell. Panels (c) and (e) require doubling of the unit cell.
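The two algebraic facts used in the construction, namely the Clifford algebra $\\{{\hat{\Gamma}}^{a},{\hat{\Gamma}}^{b}\\}=2\delta^{ab}$ of Eq. (1) and the identity $-{\hat{\Gamma}}^{1}{\hat{\Gamma}}^{2}{\hat{\Gamma}}^{3}{\hat{\Gamma}}^{4}{\hat{\Gamma}}^{5}=1$ invoked below for the projection onto physical states, can be verified directly from explicit spin-$3/2$ matrices. A minimal sketch (the basis ordering $|3/2\rangle,\ldots,|{-3/2}\rangle$ is our choice):

```python
import numpy as np

# Spin-3/2 operators in the basis |3/2>, |1/2>, |-1/2>, |-3/2>
s = 1.5
ms = np.array([1.5, 0.5, -0.5, -1.5])
Sp = np.zeros((4, 4), dtype=complex)
for i in range(1, 4):  # <m+1|S+|m> = sqrt(s(s+1) - m(m+1))
    m = ms[i]
    Sp[i - 1, i] = np.sqrt(s * (s + 1) - m * (m + 1))
Sx = (Sp + Sp.T.conj()) / 2
Sy = (Sp - Sp.T.conj()) / 2j
Sz = np.diag(ms).astype(complex)

acomm = lambda A, B: A @ B + B @ A

# Gamma matrices of Eq. (1)
G = [acomm(Sy, Sz) / np.sqrt(3),
     acomm(Sz, Sx) / np.sqrt(3),
     acomm(Sx, Sy) / np.sqrt(3),
     (Sx @ Sx - Sy @ Sy) / np.sqrt(3),
     Sz @ Sz - 1.25 * np.eye(4)]

# Clifford algebra {Gamma^a, Gamma^b} = 2 delta^{ab}
for a in range(5):
    for b in range(5):
        target = 2 * np.eye(4) if a == b else np.zeros((4, 4))
        assert np.allclose(acomm(G[a], G[b]), target)

# Identity behind the physical-state constraint: -G1 G2 G3 G4 G5 = 1
D = -G[0] @ G[1] @ G[2] @ G[3] @ G[4]
assert np.allclose(D, np.eye(4))
```

The same matrices can be reused to check any of the bilinears ${\hat{\Gamma}}^{ab}$ appearing in the Hamiltonian.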
It is possible to define a set of flux operators ${\hat{W}}_{p}$, one per plaquette $p$, which commute with the Hamiltonian and among themselves Chua _et al._ (2011); Yao _et al._ (2009). These are ${\hat{W}}_{p}=\prod_{i\in p}i{\hat{\Gamma}}^{12}_{i}$, if $p$ is of type 1, ${\hat{W}}_{p}=\prod_{i\in p}i{\hat{\Gamma}}^{34}_{i}$, if $p$ is of type 2, and ${\hat{W}}_{p}=\prod_{\langle i,j\rangle\in p}i{\hat{\Gamma}}^{23}_{i}{\hat{\Gamma}}^{14}_{j}$, if $p$ is of type 3. All products run over links taken in the counterclockwise direction around a given plaquette. In the type-3 flux operator, the first link is shared with a plaquette of type 1. The presence of such a large number of constants of motion is at the root of the solvability of the models. We now introduce six Majorana operators Chua _et al._ (2011); Yao _et al._ (2009) ${\hat{\xi}}_{i}^{1},{\hat{\xi}}_{i}^{2},{\hat{\xi}}_{i}^{3},{\hat{\xi}}_{i}^{4},{\hat{c}}_{i},$ and ${\hat{d}}_{i}$ at each site $i$, which satisfy the anticommutation relations $\\{{\hat{\xi}}_{i}^{a},{\hat{\xi}}_{j}^{b}\\}=2\delta_{ij}\delta^{ab}$, $\\{{\hat{c}}_{i},{\hat{c}}_{j}\\}=2\delta_{ij}$, $\\{{\hat{d}}_{i},{\hat{d}}_{j}\\}=2\delta_{ij}$, and $\\{{\hat{c}}_{i},{\hat{\xi}}_{j}^{a}\\}=\\{{\hat{d}}_{i},{\hat{\xi}}_{j}^{a}\\}=0$. The Gamma matrices are then expressed as ${\hat{\Gamma}}_{i}^{a}=i{\hat{\xi}}_{i}^{a}{\hat{c}}_{i}$, ${\hat{\Gamma}}_{i}^{a5}=i{\hat{\xi}}_{i}^{a}{\hat{d}}_{i}$ and ${\hat{\Gamma}}_{i}^{5}=i{\hat{c}}_{i}{\hat{d}}_{i}$, where $a=1,2,3,4$. This representation enlarges the Hilbert space, introducing non-physical states that must be projected out at the end of the calculation Chua _et al._ (2011); Yao _et al._ (2009); Kitaev (2006). To define the projection operator, we note that ${\hat{D}}_{i}=-{\hat{\Gamma}}_{i}^{1}{\hat{\Gamma}}_{i}^{2}{\hat{\Gamma}}_{i}^{3}{\hat{\Gamma}}_{i}^{4}{\hat{\Gamma}}_{i}^{5}=1$. 
In the Majorana representation, however, the eigenvalues of ${\hat{D}}_{i}=-i{\hat{\xi}}_{i}^{1}{\hat{\xi}}_{i}^{2}{\hat{\xi}}_{i}^{3}{\hat{\xi}}_{i}^{4}{\hat{c}}_{i}{\hat{d}}_{i}$ are $\pm 1$. For any physical state $|\Psi\rangle_{\rm phys}$, it must then hold that ${\hat{D}}_{i}|\Psi\rangle_{\rm phys}=|\Psi\rangle_{\rm phys}$, and therefore one can define the projection operator onto the physical Hilbert subspace as Chua _et al._ (2011); Yao _et al._ (2009) ${\hat{P}}=\prod_{i}(1+{\hat{D}}_{i})/2$. In the Majorana representation, Eq. (2) becomes ${\hat{\cal H}_{\rm M}}=i\sum_{\langle i,j\rangle\in{\cal P}_{\alpha}}[J_{\alpha}{\hat{u}}_{ij}^{\alpha}{\hat{c}}_{i}{\hat{c}}_{j}+J_{\alpha}^{\prime}{\hat{u}}_{ij}^{\alpha}{\hat{d}}_{i}{\hat{d}}_{j}]+iJ_{5}\sum_{i}{\hat{c}}_{i}{\hat{d}}_{i},$ (3) where $\alpha=1,2$, ${\hat{u}}_{ij}^{1}=-i{\hat{\xi}}_{i}^{1}{\hat{\xi}}_{j}^{2}$, and ${\hat{u}}_{ij}^{2}=-i{\hat{\xi}}_{i}^{3}{\hat{\xi}}_{j}^{4}$. It can be shown Chua _et al._ (2011); Yao _et al._ (2009); Kitaev (2006) that all ${\hat{u}}_{ij}^{\alpha}$ commute with the Hamiltonian (3). Consequently, the full Hilbert space can be divided into sectors, each obtained by replacing ${\hat{u}}_{ij}^{\alpha}$ with the eigenvalues $u_{ij}^{\alpha}=\pm 1$. In each sector, Eq. (3) describes free Majorana particles (${\hat{c}}$ and ${\hat{d}}$) propagating on top of a quenched $\mathbb{Z}_{2}$ gauge potential $u_{ij}^{\alpha}$. The eigenvalues of the flux operators are $W_{p}=\prod_{\langle i,j\rangle\in p}iu_{ij}^{\alpha}\equiv e^{i\phi_{p}}$. Here, $\phi_{p}$ is the flux through a given plaquette. For Kagome lattices, $\phi_{p}=\pm\pi/2$ if $p$ is of type 1 or 2, while $\phi_{p}=0$ or $\pi$ if $p$ is of type 3. Because of the antisymmetry of the Hamiltonian, a total of four distinct flux patterns exist for any unit cell [see Fig. 1(c)-(f)].
The total flux through a unit cell is either $0$ or $\pi$, and in the $\pi$ case doubling the unit cell is required. In the square lattice, the flux through any plaquette can only be either $\phi_{p}=0$ or $\pi$. Not all sectors describe different physical systems. In fact, the variables commuting with the original spin Hamiltonian are the fluxes ${\hat{W}}_{p}$ and not the gauge potential ${\hat{u}}_{ij}^{\alpha}$. By performing the gauge transformation ${\hat{c}}_{i}\to\Lambda_{i}{\hat{c}}_{i}$, ${\hat{d}}_{i}\to\Lambda_{i}{\hat{d}}_{i}$ and $u_{ij}^{\alpha}\to\Lambda_{i}u_{ij}^{\alpha}\Lambda_{j}$, with $\Lambda_{i}=\pm 1$, both the Hamiltonian and the fluxes remain invariant. Hence, $2^{N}$ configurations of $u_{ij}^{\alpha}$ describe the same physical state Chua _et al._ (2011); Yao _et al._ (2009); Kitaev (2006). One can choose to work with any configuration, depending on convenience: upon projection with ${\hat{P}}$, the physical state becomes a superposition of states of all equivalent Hilbert space sectors. We now set $J_{5}=0$ and analyze the two lattices separately. Square lattice—We rewrite the Hamiltonian (3) as $\displaystyle{\hat{\cal H}_{\rm M}}$ $\displaystyle=$ $\displaystyle i\sum_{\ell,m}(-)^{\ell+m}\big{[}u_{\ell,m}^{x}({\tilde{J}}_{m}{\hat{c}}_{\ell,m}{\hat{c}}_{\ell,m+1}+{\tilde{J}}_{m}^{\prime}{\hat{d}}_{\ell,m}{\hat{d}}_{\ell,m+1})$ (4) $\displaystyle-$ $\displaystyle u_{\ell,m}^{y}({\tilde{J}}_{\ell}{\hat{c}}_{\ell,m}{\hat{c}}_{\ell+1,m}+{\tilde{J}}_{\ell}^{\prime}{\hat{d}}_{\ell,m}{\hat{d}}_{\ell+1,m})\big{]},$ where $\ell$ and $m$ denote the row and column in the lattice of Fig. 1(a), respectively, and $2{\tilde{J}}_{\ell}=(J_{1}+J_{2})+(-)^{\ell}(J_{1}-J_{2})$ (${\tilde{J}}_{\ell}^{\prime}$ is analogously defined, with $J_{1}^{\prime}$ and $J_{2}^{\prime}$ in lieu of $J_{1}$ and $J_{2}$). In Eq.
(4), $u_{\ell,m}^{x}$ [$u_{\ell,m}^{y}$] is the value of the $\mathbb{Z}_{2}$ gauge field between sites $i=(\ell,m)$ and $j=(\ell,m+1)$ [$i=(\ell,m)$ and $j=(\ell+1,m)$] along the $x$ [$y$] direction. Thanks to Lieb’s theorem Lieb (1994), the ground state of the square lattice with equal hopping amplitudes is known to contain one flux quantum per plaquette ($\phi_{p}=\pi$). (Strictly speaking, Lieb’s theorem holds for periodic systems; following Kitaev Kitaev (2006), we will assume that the ground state of a large open lattice coincides with that of the associated periodic system.) When the difference between hopping amplitudes is much smaller than the two-flux excitation energy, we expect the ground state configuration to remain unchanged. We observe that $-{\hat{\cal H}_{\rm M}}$ has the same flux pattern as ${\hat{\cal H}_{\rm M}}$, since fluxes are defined modulo $2\pi$. Therefore, $-{\hat{\cal H}_{\rm M}}={\hat{\cal G}}{\hat{\cal H}_{\rm M}}{\hat{\cal G}}^{-1}$, where ${\hat{\cal G}}$ is a gauge transformation that inverts the signs of all $u_{\ell,m}^{x(y)}$. This implies that the eigenvalues of ${\hat{\cal H}_{\rm M}}$ must be symmetric about zero, since if the state $|\psi\rangle$ has energy $E$, then the state ${\hat{\cal G}}|\psi\rangle$ has energy $-E$. We also note that, if ${\hat{\cal R}}$ is a $90^{\circ}$ rotation, there exists a gauge operation ${\hat{\cal G}}$ such that ${\hat{\cal R}}{\hat{\cal H}_{\rm M}}{\hat{\cal R}}^{-1}={\hat{\cal G}}^{-1}{\hat{\cal H}_{\rm M}}{\hat{\cal G}}$. Therefore ${\hat{\cal G}}{\hat{\cal R}}$ is a symmetry of the system. It can be easily verified that $({\hat{\cal G}}{\hat{\cal R}})^{4}=1$, and therefore the eigenstates of (4) can be one-, two-, or four-fold degenerate.
To progress further, we introduce the fermion operator ${\hat{f}}_{\ell,m}$, such that ${\hat{c}}_{\ell,m}=i^{\ell+m}{\hat{f}}_{\ell,m}^{\dagger}+(-i)^{\ell+m}{\hat{f}}_{\ell,m}$ and ${\hat{d}}_{\ell,m}=i^{\ell+m+1}{\hat{f}}_{\ell,m}^{\dagger}+(-i)^{\ell+m+1}{\hat{f}}_{\ell,m}$. Plugging these expressions into Eq. (4), we find $\displaystyle{\hat{\cal H}_{\rm M}}=\sum_{\ell,m}\big{\\{}u_{\ell,m}^{x}\big{[}(-)^{\ell+m}t_{m}{\hat{f}}_{\ell,m}^{\dagger}{\hat{f}}_{\ell,m+1}-\Delta_{m}{\hat{f}}_{\ell,m}^{\dagger}{\hat{f}}_{\ell,m+1}^{\dagger}\big{]}$ $\displaystyle- u_{\ell,m}^{y}\big{[}(-)^{\ell+m}t_{\ell}{\hat{f}}_{\ell,m}^{\dagger}{\hat{f}}_{\ell+1,m}-\Delta_{\ell}{\hat{f}}_{\ell,m}^{\dagger}{\hat{f}}_{\ell+1,m}^{\dagger}\big{]}\big{\\}}+{\rm h.c.},$ (5) where $t_{\ell}={\tilde{J}}_{\ell}+{\tilde{J}}_{\ell}^{\prime}$ and $\Delta_{\ell}={\tilde{J}}_{\ell}-{\tilde{J}}_{\ell}^{\prime}$. The Hamiltonian (5) is in form identical to that of spinless electrons paired by p-wave superconductivity. Here, however, the pairing can have a vortex structure commensurate to the lattice (see more below). From now on, we choose the gauge-potential configuration $u_{\ell,m}^{x}=(-1)^{\ell+m}$ and $u_{\ell,m}^{y}=+1$, which corresponds to having a $\pi$-flux in each plaquette. Figure 2: Panel (a) The eigenvalues of Hamiltonian (6) for both $J_{2}=J_{1}$ (inner bands) and $J_{2}>J_{1}$ (outer bands). Panels (b) and (c) The two corner states of Majorana particles obtained by numerically diagonalizing the $c$-part of Hamiltonian (4) for $J_{1}=0.3$ and $J_{2}=1$. Due to the structure of Eq.
(4), $d$-particles would exhibit the same states for $J_{2}^{\prime}>J_{1}^{\prime}$. Up until now, we have left the couplings $J_{1},J_{2},J_{1}^{\prime}$ and $J_{2}^{\prime}$ unspecified. Now we study two interesting situations. In the first case we set $J_{1}^{\prime}=J_{1}$ and $J_{2}^{\prime}=J_{2}$. In turn, $t_{\ell}=(J_{1}+J_{2})+(-)^{\ell}(J_{1}-J_{2})$ ($t_{m}$ is analogously defined), while $\Delta_{\ell}=\Delta_{m}=0$. The resulting Hamiltonian is in form identical to that of a fermionic SOTI Benalcazar _et al._ (2017a), and therefore exhibits a phase transition at $J_{1}=J_{2}$. For a system with periodic boundary conditions we can rewrite ${\hat{\cal H}}_{\rm M}=\sum_{{\bm{k}},\alpha,\beta}{\hat{F}}_{{\bm{k}},\alpha}^{\dagger}H^{(0)}_{{\bm{k}},\alpha\beta}{\hat{F}}_{{\bm{k}},\beta}$, where ${\hat{F}}_{{\bm{k}}}={}^{t}({\hat{f}}_{{\bm{k}},1},{\hat{f}}_{{\bm{k}},2},{\hat{f}}_{{\bm{k}},3},{\hat{f}}_{{\bm{k}},4})$ [the unit cell, with the four sites labeled $1,\ldots,4$, is shown in Fig. 1(a)]. Here, t denotes transposition, while ($a$ is the side of the unit cell) $\displaystyle H^{(0)}_{\bm{k}}$ $\displaystyle=$ $\displaystyle\big{[}t_{0}+t_{1}\cos(k_{x}a)\big{]}\openone\otimes\sigma^{x}+\big{\\{}t_{1}\sin(k_{y}a)\sigma^{x}$ (6) $\displaystyle+$ $\displaystyle\big{[}t_{0}-t_{1}\cos(k_{y}a)\big{]}\sigma^{y}+t_{1}\sin(k_{x}a)\sigma^{z}\big{\\}}\otimes\sigma^{y}.$ Here, $\openone$ is the $2\times 2$ identity matrix and $\sigma^{a}$ ($a=x,y,z$) are Pauli matrices. Fig. 2(a) shows the eigenvalues of Eq. (6) for two values of $J_{1}$ and $J_{2}$. When $J_{1}=J_{2}$, the two bands are doubly-degenerate and touch linearly at the points $(\pm\pi/a,0)$. A gap opens when $J_{2}\neq J_{1}$. When such a system is made finite, it is seen to transition from trivial insulator to SOTI Benalcazar _et al._ (2017a) at $J_{1}=J_{2}$. In Fig. 2(b)-(c), we show the lowest eigenstates of Eq.
(5), defined on a square lattice with four unit cells along each edge, for $J_{2}>J_{1}$. These are zero-energy corner states, each of which is doubly degenerate and exhibits a periodic modulation along the edges reminiscent of SSH edge states. In fact, the weights of corner states alternate between finite and zero going along each of the two neighboring edges. Edge states also appear at finite energy. Unlike corner states, these are four-fold degenerate. More intriguing is the configuration in which $J_{1}^{\prime}=J_{2}$ and $J_{2}^{\prime}=J_{1}$, which implies $t_{\ell}=t_{m}=J_{1}+J_{2}$ and $\Delta_{\ell}=(-)^{\ell}(J_{1}-J_{2})$. In this case, both the hopping part and the p-wave pairing in Eq. (5) are characterized by a $\pi$-flux structure. The spin system is therefore mapped into spinless fermions under the simultaneous effect of $\mathbb{Z}_{2}$ fluxes and p-wave pairing exhibiting a $\pi$-flux structure (i.e. an Abrikosov lattice of “half vortices” commensurate to the square lattice).
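The corner modes discussed above can be reproduced by diagonalizing the single-particle matrix of the ${\hat{c}}$-part of Eq. (4) in the gauge $u_{\ell,m}^{x}=(-1)^{\ell+m}$, $u_{\ell,m}^{y}=+1$. A sketch (the lattice size and the strongly dimerized couplings are illustrative choices; the paper uses $J_{1}=0.3$, $J_{2}=1$):

```python
import numpy as np

def c_sector_spectrum(L, J1, J2):
    """Single-particle spectrum of the c-part of Eq. (4) on an L x L open
    lattice, in the gauge u^x_{l,m} = (-1)^(l+m), u^y_{l,m} = +1
    (one pi flux per plaquette)."""
    Jt = lambda n: J1 if n % 2 == 0 else J2  # 2*Jt_n = (J1+J2) + (-1)^n (J1-J2)
    idx = lambda l, m: l * L + m
    A = np.zeros((L * L, L * L))
    for l in range(L):
        for m in range(L):
            if m + 1 < L:  # horizontal bond: the (-1)^(l+m) prefactor cancels u^x
                A[idx(l, m), idx(l, m + 1)] = Jt(m)
            if l + 1 < L:  # vertical bond: prefactor times u^y = +1
                A[idx(l, m), idx(l + 1, m)] = -(-1) ** (l + m) * Jt(l)
    M = A - A.T          # H = (i/2) sum_{jk} M_{jk} c_j c_k, M real antisymmetric
    return np.linalg.eigvalsh(1j * M)

# Strong dimerization J2 >> J1 with weak bonds at the boundary: four
# (near-)zero Majorana corner modes split off from the gapped spectrum.
E = c_sector_spectrum(12, J1=0.01, J2=1.0)
absE = np.sort(np.abs(E))
```

The spectrum is symmetric about zero, as guaranteed by the gauge argument above, and the four smallest eigenvalues (the corner modes) are exponentially close to zero while the rest of the spectrum remains gapped.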
For a system with periodic boundary conditions, the Hamiltonian is $\displaystyle{\hat{\cal H}}_{\rm M}=\sum_{\bm{k}}\left({\hat{F}}_{{\bm{k}}}^{\dagger},{\hat{F}}_{-{\bm{k}}}\right)\begin{pmatrix}H^{(0)}_{\bm{k}}&\Delta_{\bm{k}}\\ \Delta^{\dagger}_{\bm{k}}&-{}^{t}H^{(0)}_{-{\bm{k}}}\end{pmatrix}\begin{pmatrix}{\hat{F}}_{{\bm{k}}}\\ {\hat{F}}_{-{\bm{k}}}^{\dagger}\end{pmatrix},$ (11) where $\displaystyle\Delta_{\bm{k}}$ $\displaystyle=$ $\displaystyle-i\big{[}\Delta_{0}+\Delta_{1}\cos(k_{x}a)\big{]}\openone\otimes\sigma^{y}+i\big{\\{}\Delta_{1}\sin(k_{y}a)\sigma^{x}$ (12) $\displaystyle+$ $\displaystyle\big{[}\Delta_{0}-\Delta_{1}\cos(k_{y}a)\big{]}\sigma^{y}+\Delta_{1}\sin(k_{x}a)\sigma^{z}\big{\\}}\otimes\sigma^{x}.$ Since all hopping amplitudes are equal, if the pairing is artificially made to vanish the band structure of Eq. (11) exhibits no gap, exactly as in Fig. 2(a). The pairing opens a gap and its vortex structure enables the localization of Majorana particles at the corners of finite systems [analogously to Figs. 2(b)-(c)]. Majorana particles are present for all $J_{1}$ and $J_{2}$, except when $J_{1}=J_{2}$, for which the gap that stabilizes corner states closes. All these findings are made clearer by observing that Eq. (4) describes two copies of the same “Majorana SOTI” Benalcazar _et al._ (2017a), one for the ${\hat{c}}$- and one for the ${\hat{d}}$-particles. When $J_{5}=0$, the two are independent and exhibit the SOTI transition at $J_{1}=J_{2}$ and $J_{1}^{\prime}=J_{2}^{\prime}$, respectively. When $J_{1}^{\prime}=J_{1}$ and $J_{2}^{\prime}=J_{2}$, both ${\hat{c}}$- and ${\hat{d}}$-Majorana particles localize at the corners for $J_{2}>J_{1}$ (when the number of plaquettes per side is odd). Therefore, as shown above, corner states have fermionic statistics.
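The gap closing at $J_{1}=J_{2}$ visible in Fig. 2(a) can also be checked directly from the Bloch Hamiltonian of Eq. (6), with $t_{0}=2J_{1}$ and $t_{1}=2J_{2}$ in the unpaired configuration $J_{1}^{\prime}=J_{1}$, $J_{2}^{\prime}=J_{2}$. A sketch (lattice constant $a=1$ and the grid resolution are our choices):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)
id2 = np.eye(2)

def h0(kx, ky, t0, t1):
    """Bloch Hamiltonian of Eq. (6), lattice constant a = 1."""
    return ((t0 + t1 * np.cos(kx)) * np.kron(id2, sx)
            + np.kron(t1 * np.sin(ky) * sx
                      + (t0 - t1 * np.cos(ky)) * sy
                      + t1 * np.sin(kx) * sz, sy))

def min_gap(t0, t1, n=101):
    """Smallest |E| over an n x n grid of the Brillouin zone."""
    ks = np.linspace(-np.pi, np.pi, n)
    return min(np.abs(np.linalg.eigvalsh(h0(kx, ky, t0, t1))).min()
               for kx in ks for ky in ks)

# The four terms of Eq. (6) pairwise anticommute, so the bands come in
# doubly degenerate pairs; the gap closes at (kx, ky) = (pi, 0) for t0 = t1.
```

For instance, `min_gap(1.0, 1.0)` vanishes to machine precision, while `min_gap(0.6, 1.0)` is finite, consistent with the gap opening for $J_{2}\neq J_{1}$.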
On the contrary, when $J_{1}^{\prime}=J_{2}$ and $J_{2}^{\prime}=J_{1}$, ${\hat{c}}$- and ${\hat{d}}$-Majorana particles localize at the corners on opposite sides of the transition point $J_{1}=J_{2}$. Therefore, for $J_{2}>J_{1}$ ($J_{2}<J_{1}$) only ${\hat{c}}$- (${\hat{d}}$-)Majorana particles localize at the corners. Hence, the system is a Majorana SOTI for all values of $J_{1}\neq J_{2}$. Note that this renders $J_{5}$ largely ineffective: such a coupling pairs Majorana particles of different kinds at the same site, but for any value of $J_{1}\neq J_{2}$ only one type of particle appears at the corners (see also the Supplemental Material). Kagome lattice—The ground state of the Kagome lattice is not analytically known. For this reason, we explore all four configurations shown in Fig. 1(c)-(f). It was indeed shown in Ref. Chua _et al._ (2011) that these have very similar energies, and which of them is the ground state can depend on the size of the lattice. For the sake of presentation, we consider the triangular lattice shown in Fig. 3. We diagonalize only the ${\hat{c}}$-part of Eq. (3). In fact, when $J_{5}=0$ the ${\hat{c}}$- and ${\hat{d}}$-particles are completely independent, and the behavior of the latter can be deduced from that of the former by replacing $J_{1}\to J_{1}^{\prime}$ and $J_{2}\to J_{2}^{\prime}$. Figure 3: Edge and corner states for the flux pattern of Fig. 1(e). The color scheme is the same as in Fig. 2(b)-(c). Panel (a) A typical type-one finite-energy edge state, obtained with $J_{1}=0.1$ and $J_{2}=1$. Panel (b) One of the zero-energy corner states, obtained with the same parameters as Panel (a). Panel (c) Type-two edge state with $J_{1}=1$ and $J_{2}=0.1$.
In all four cases we find edge states as in Fig. 3(a) coexisting with the zero-energy corner states of Fig. 3(b) whenever $J_{1}<J_{2}$. We name such states “type one”, to distinguish them from those of Fig. 3(c) (see below). A typical type-one edge state is localized along a series of lines parallel to the edge, and, similar to those of the SSH model, its weights alternate between zero and nonzero values as one moves farther into the bulk. Similarly, the weight of type-one corner states alternates between zero and nonzero values along the two neighboring edges. Contrary to all other configurations, that of Fig. 1(e) exhibits a second type of edge states, which we name “type two”. These edge states are considerably more complicated than the type-one states discussed above. Firstly, type-two edge states occur when one of $J_{1},J_{2}$ is significantly larger than the other (at $J_{1}=J_{2}$ the system is gapless). Secondly, they are localized along a line of triangles characterized by strong (either $J_{1}$ or $J_{2}$) bonds. Conclusion—In this manuscript we have studied two spin-$3/2$ models, defined on Kagome Chua _et al._ (2011) and square Yao _et al._ (2009) lattices, that support QSL phases at low temperature. As in Kitaev’s work Kitaev (2006), spins are fractionalized by means of Majorana particles, in this case six of them. Four of them give rise to a $\mathbb{Z}_{2}$ gauge potential on top of which the remaining two propagate Chua _et al._ (2011); Yao _et al._ (2009). In the case of the square lattice, the ground-state $\mathbb{Z}_{2}$ flux configuration is exactly known thanks to a theorem by Lieb Lieb (1994), and therefore this problem can be solved analytically. On the contrary, since the Kagome lattice is not bipartite, its ground-state flux configuration is unknown. To circumvent this problem, we have studied the four possible gauge-inequivalent flux configurations commensurate to the unit cell.
We find that both models can support topologically-protected corner states when magnetic couplings are made unequal. Such states are fermions when both free particles resulting from the spin fractionalization are able to localize at the corners. On the contrary, when only one of the two species can localize, the corner states are Majorana particles. Notably, in the latter case such quasiparticles are protected against perturbations that locally mix the two kinds of free Majorana particles. We observe that the ground-state energy is lowered by making the magnetic couplings unequal. Therefore, we speculate that real systems could undergo lattice distortions in order to lower their magnetic energy and simultaneously localize fermionic or Majorana states at their corners. In the Kagome lattice, corner states coexist with edge states, a distinctive trait of the chiral QSL realized in such a system Chua _et al._ (2011). We find that all four flux configurations exhibit the same type of edge and corner states, but one of them also supports a second set of edge states which have no analog in the other three. We conclude by noting that corner Majorana states have been predicted to occur in a similar spin-$3/2$ model defined on a Shastry-Sutherland lattice Dwivedi _et al._ (2018). The latter is akin to our Kagome model: it exhibits both chiral and gapped QSL phases, but the ground state is not exactly known since Lieb’s theorem does not apply. On the contrary, the ground state flux configuration of the square-lattice model we study is exactly known. We believe that this fact lends credibility to the possibility of localizing fermionic or Majorana particles at corners of QSLs. Furthermore, we hope that the relative simplicity of the lattices studied here and their relative abundance in nature will stimulate the search for material realizations of second-order QSLs supporting topologically-protected corner states. Acknowledgments—A.P.
acknowledges support from the European Commission under the EU Horizon 2020 MSCA-RISE-2019 programme (project 873028 HYDROTRONICS). ## Appendix A Protection of corner states against $J_{5}$ Figure 4: Energy spectrum of the square model as a function of $J_{5}$. In both figures, $J_{1}=0.3$ and $J_{2}=1$. Panel (a) Spectrum with $J_{1}^{\prime}=J_{1}$ and $J_{2}^{\prime}=J_{2}$. Panel (b) Spectrum with $J_{1}^{\prime}=J_{2}$ and $J_{2}^{\prime}=J_{1}$. It is evident that Majorana corner states [Panel (b)] are protected against the inter-species coupling $J_{5}$ up to $|J_{5}|\sim 1.3$. In this Supplemental Material we show that Majorana corner states are robust against $J_{5}$ by computing the energy spectrum of the square model for both particle species as a function of $J_{5}$. In the case where $J_{1}^{\prime}=J_{1}=0.3$ and $J_{2}^{\prime}=J_{2}=1$ (fermionic corner states), we find that the energies of corner states split away from zero linearly as $J_{5}$ increases in strength, as shown in Fig. 4(a). On the contrary, when $J_{1}^{\prime}=J_{2}=1$ and $J_{2}^{\prime}=J_{1}=0.3$ (Majorana corner states), the corner states remain at zero energy for $|J_{5}|\lesssim 1.3$. We conclude that on-site mixing of the two species is forbidden for small-to-moderate inter-species couplings, as expected. ## References * Anderson (1973) P. Anderson, Materials Research Bulletin 8, 153 (1973). * Zhou _et al._ (2017) Y. Zhou, K. Kanoda, and T.-K. Ng, Rev. Mod. Phys. 89, 025003 (2017). * Savary and Balents (2016) L. Savary and L. Balents, Reports on Progress in Physics 80, 016502 (2016). * Wen (2004) X. G. Wen, _Quantum Field Theory of Many-Body Systems: From the Origin of Sound to an Origin of Light and Electrons_ (Oxford University Press, New York, 2004). * Kitaev (2006) A. Kitaev, Annals of Physics 321, 2 (2006).
* Banerjee _et al._ (2017) A. Banerjee, J. Yan, J. Knolle, C. A. Bridges, M. B. Stone, M. D. Lumsden, D. G. Mandrus, D. A. Tennant, R. Moessner, and S. E. Nagler, Science 356, 1055 (2017). * Kitaev (2003) A. Kitaev, Annals of Physics 303, 2 (2003). * Sarma _et al._ (2015) S. D. Sarma, M. Freedman, and C. Nayak, npj Quantum Information 1, 15001 (2015). * Hoffman _et al._ (2016) S. Hoffman, C. Schrade, J. Klinovaja, and D. Loss, Phys. Rev. B 94, 045316 (2016). * Lian _et al._ (2018) B. Lian, X.-Q. Sun, A. Vaezi, X.-L. Qi, and S.-C. Zhang, Proceedings of the National Academy of Sciences 115, 10938 (2018). * Knolle (2016) J. Knolle, _Dynamics of a Quantum Spin Liquid_ (Springer, Heidelberg, 2016). * Benalcazar _et al._ (2017a) W. A. Benalcazar, B. A. Bernevig, and T. L. Hughes, Science 357, 61 (2017a). * Benalcazar _et al._ (2017b) W. A. Benalcazar, B. A. Bernevig, and T. L. Hughes, Phys. Rev. B 96, 245115 (2017b). * Schindler _et al._ (2018a) F. Schindler, A. M. Cook, M. G. Vergniory, Z. Wang, S. S. P. Parkin, B. A. Bernevig, and T. Neupert, Science Advances 4 (2018a), 10.1126/sciadv.aat0346. * Langbehn _et al._ (2017) J. Langbehn, Y. Peng, L. Trifunovic, F. von Oppen, and P. W. Brouwer, Phys. Rev. Lett. 119, 246401 (2017). * Geier _et al._ (2018) M. Geier, L. Trifunovic, M. Hoskam, and P. W. Brouwer, Phys. Rev. B 97, 205135 (2018). * Ezawa (2018a) M. Ezawa, Phys. Rev. Lett. 120, 026801 (2018a). * Song _et al._ (2017) Z. Song, Z. Fang, and C. Fang, Phys. Rev. Lett. 119, 246402 (2017). * Ezawa (2018b) M. Ezawa, Phys. Rev. B 97, 155305 (2018b). * Ezawa (2018c) M. Ezawa, Phys. Rev. B 97, 241402 (2018c). * Serra-Garcia _et al._ (2018) M. Serra-Garcia, V. Peri, R. Süsstrunk, O. R. Bilal, T. Larsen, L. G. Villanueva, and S. D. Huber, Nature 555, 342 (2018). * Imhof _et al._ (2018) S. Imhof, C. Berger, F. Bayer, J. Brehm, L. W. Molenkamp, T. Kiessling, F. Schindler, C. H. Lee, M. Greiter, T. Neupert, and R. Thomale, Nature Physics 14, 925 (2018). 
* Peterson _et al._ (2018) C. W. Peterson, W. A. Benalcazar, T. L. Hughes, and G. Bahl, Nature 555, 346 (2018). * El Hassan _et al._ (2019) A. El Hassan, F. K. Kunst, A. Moritz, G. Andler, E. J. Bergholtz, and M. Bourennane, Nature Photonics 13, 697 (2019). * Schindler _et al._ (2018b) F. Schindler, Z. Wang, M. G. Vergniory, A. M. Cook, A. Murani, S. Sengupta, A. Y. Kasumov, R. Deblock, S. Jeon, I. Drozdov, H. Bouchiat, S. Guéron, A. Yazdani, B. A. Bernevig, and T. Neupert, Nature Physics 14, 918 (2018b). * Luo and Zhang (2019) X.-W. Luo and C. Zhang, Phys. Rev. Lett. 123, 073601 (2019). * Liu _et al._ (2018) T. Liu, J. J. He, and F. Nori, Phys. Rev. B 98, 245413 (2018). * Zhu (2018) X. Zhu, Phys. Rev. B 97, 205134 (2018). * Laubscher _et al._ (2019) K. Laubscher, D. Loss, and J. Klinovaja, Phys. Rev. Research 1, 032017 (2019). * Wang _et al._ (2018) Q. Wang, C.-C. Liu, Y.-M. Lu, and F. Zhang, Phys. Rev. Lett. 121, 186801 (2018). * Kheirkhah _et al._ (2020) M. Kheirkhah, Y. Nagai, C. Chen, and F. Marsiglio, Phys. Rev. B 101, 104502 (2020). * Yan (2019) Z. Yan, Phys. Rev. Lett. 123, 177001 (2019). * Dwivedi _et al._ (2018) V. Dwivedi, C. Hickey, T. Eschmann, and S. Trebst, Phys. Rev. B 98, 054432 (2018). * Note (1) The model of Ref. Dwivedi _et al._ (2018) can be solved exactly by mapping spins into Majorana particles which are free to propagate on top of a quenched $\mathbb{Z}_{2}$ gauge potential. However, the ground-state flux configuration is not analytically known and must be found with numerical techniques. * Chua _et al._ (2011) V. Chua, H. Yao, and G. A. Fiete, Phys. Rev. B 83, 180412 (2011). * Yao _et al._ (2009) H. Yao, S.-C. Zhang, and S. A. Kivelson, Phys. Rev. Lett. 102, 217202 (2009). * Lieb (1994) E. H. Lieb, Phys. Rev. Lett. 73, 2158 (1994). * Note (2) Strictly speaking, Lieb’s theorem holds for periodic systems.
Following Kitaev Kitaev (2006), we will assume that the ground state of a large open lattice coincides with that of the associated periodic system. * Note (3) See also the Supplemental Online Material.
# Spacetime as a quantum circuit

A. Ramesh Chandra,$^{a}$ Jan de Boer,$^{a}$ Mario Flory,$^{b,c}$ Michal P. Heller,$^{d,1}$ Sergio Hörtner$^{a}$ and Andrew Rolph$^{a}$

$^{a}$Institute for Theoretical Physics, University of Amsterdam, PO Box 94485, 1090 GL Amsterdam, The Netherlands
$^{b}$Institute of Physics, Jagiellonian University, 30-348 Kraków, Poland
$^{c}$Instituto de Física Teórica IFT-UAM/CSIC, Universidad Autónoma de Madrid, 28049, Madrid, Spain
$^{d}$Max Planck Institute for Gravitational Physics (Albert Einstein Institute), 14476 Potsdam-Golm, Germany
$^{1}$On leave of absence from: National Centre for Nuclear Research, 02-093 Warsaw, Poland

[email protected] [email protected] [email protected] [email protected] [email protected] [email protected]

###### Abstract We propose that finite cutoff regions of holographic spacetimes represent quantum circuits that map between boundary states at different times and Wilsonian cutoffs, and that the complexity of those quantum circuits is given by the gravitational action. The optimal circuit minimizes the gravitational action. This is a generalization of both the “complexity equals volume” conjecture to unoptimized circuits, and path integral optimization to finite cutoffs. Using tools from holographic $T\bar{T}$, we find that surfaces of constant scalar curvature play a special role in optimizing quantum circuits. We also find an interesting connection of our proposal to kinematic space, and discuss possible circuit representations and gate counting interpretations of the gravitational action. ## 1 Introduction Quantum information theoretic concepts such as entanglement entropy have proven to be of fundamental importance for our understanding of quantum gravity, most notably in the context of the AdS/CFT correspondence Ryu:2006bv ; Maldacena:1997re .
However, it has also been claimed that “entanglement is not enough” Susskind:2014moa , such as in the inability of holographic entanglement entropy to probe the late time linear growth of the Einstein-Rosen bridge in eternal black holes, and that other concepts, in particular state complexity, are needed for a more complete understanding. The notion of state complexity in quantum mechanics refers to a setup where one is given an initial state, a final state, a margin of error and a list of allowed unitary operations. The smallest number of unitaries needed to obtain the final state from the initial state up to the margin of error is an indication of how difficult it is to obtain the final state from the initial state. If the initial state is a fixed and very simple state, e.g. an unentangled product state, one can simply refer to the complexity of the final state as state complexity. The idea to build interesting quantum states using a limited set of operations has been successful in condensed matter physics, leading e.g. to tensor network representations of states Orus:2013kga . Moreover, a specific relation between tensor networks and gravity has been proposed Swingle:2009bg ; Swingle:2012wq in which one interprets constant time slices of AdS spacetime as a so-called multiscale entanglement renormalization ansatz (MERA) tensor network Vidal:2008zz . As a first piece of evidence for this relation one notices that the tensor network in question indeed closely resembles a lattice discretization of an equal time slice of AdS. If one imagines that this MERA tensor network is the optimal network to obtain the ground state of a (discretized) CFT, then the number of tensors needed equals the volume of the equal time slice.
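The brute-force notion of gate counting described above can be made concrete in a toy setting. The following sketch (ours, not from the paper) assumes a single-qubit gate set consisting of the Hadamard and T gates and a fidelity-based margin of error; the function `state_complexity` and all names here are illustrative assumptions:

```python
import numpy as np

# Toy single-qubit gate set (our illustrative assumption): Hadamard and T.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
T = np.diag([1.0, np.exp(1j * np.pi / 4)])

def state_complexity(target, initial, gates, eps, max_depth=6):
    """Breadth-first search for the smallest number of gates mapping
    `initial` to `target` within a fidelity deficit `eps`."""
    frontier = [initial]
    for depth in range(max_depth + 1):
        if any(1 - abs(np.vdot(target, psi)) ** 2 < eps for psi in frontier):
            return depth
        # Extend every sequence in the frontier by one gate.
        frontier = [g @ psi for psi in frontier for g in gates]
    return None  # not reachable within max_depth

ket0 = np.array([1.0, 0.0], dtype=complex)
plus = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)
print(state_complexity(ket0, ket0, [H, T], 1e-9))  # 0: no gate needed
print(state_complexity(plus, ket0, [H, T], 1e-9))  # 1: a single Hadamard
```

The exponential growth of the search frontier with depth illustrates why complexity is a hard quantity to compute microscopically, and why geometric proxies for it are attractive.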
This subsequently led to a much more general “complexity equals volume” proposal Stanford:2014jda where one proposes that the complexity of any viable state in holographic quantum field theories can be obtained from the minimal volume of a slice of the geometry which is anchored at the relevant fixed boundary time slice. Other complexity proposals include those where complexity is computed from the action Brown:2015bva ; Brown:2015lvg or spacetime volumes Couch:2016exn evaluated in bulk Wheeler-DeWitt (WdW) patches. All these holographic complexity proposals share certain qualitative features that any notion of state complexity should possess, while still lacking a precise microscopic definition in the dual CFT. A continuum version of MERA Haegeman:2011uy was an important factor in the realization of Jefferson:2017sdb ; Chapman:2017rqy that a natural way to count gates and define complexity in QFT is by assigning a metric to a suitable group underlying the state preparation of interest Nielsen1133 . While this is arguably the most promising way to define and prove holographic complexity proposals, or to find other gravitational manifestations of complexity, in this paper we will not directly attempt to find a precise microscopic definition of complexity in holographic CFTs. Instead, we will propose a significant refinement of the relation between geometry and complexity as follows: we suggest that any spacetime region can be interpreted as a quantum circuit, with the gravitational action providing a notion of complexity for this particular quantum circuit. Figure 1: We consider a subregion $M$ of Euclidean Poincaré AdS3. We introduce two time-slices $t=t_{i}$ and $t=t_{f}$ corresponding to the field theory ground states $|0\rangle_{z_{i}}$ and $|0\rangle_{z_{f}}$, which are prepared for different values of the radial cutoff. The radial boundary is at finite cutoff, $z=\rho(t)$.
Our proposal is that the complexity of the circuit that maps between these ground states with different finite Wilsonian cutoffs is given by the gravitational action on $M$. Let us make our proposal more precise. Take a Euclidean asymptotically AdS geometry with radial coordinate $z$, the asymptotic boundary being at $z=0$, and a spacetime region $M$ given by $t_{i}\leq t\leq t_{f}$ and $z\geq\rho(t)$ for some function $\rho(t)$, see figure 1. The AdS geometry describes the time evolution of a given state, and the region $z\geq\rho(t)$ knows about the state, but only up to a UV cutoff set by $\rho(t)$. Here, we use the well-known relation between the radial distance in AdS and the UV cutoff in the theory Susskind:1998dq and its recent refinement McGough:2016lol . The latter development links gravity in AdS spacetimes with a finite radial cutoff to finite irrelevant deformations of dual CFTs by a $T\bar{T}$ operator, and this will be a useful way of thinking about the UV cutoff in the remainder of this paper. The bulk geometry of interest can be thought of as describing a sequence of states, which are related to each other by both Euclidean time evolution and a change of UV cutoff. One can ask what the complexity of this particular process is, i.e. how many operations would be required to recover a (discretized version of) this sequence of states. For previous work combining holographic $T\bar{T}$ and complexity, see Jafari:2019qns ; Geng:2019yxo ; Chen:2019mis (see also Chakraborty:2020fpt for a related development). We propose that the number of operations, the complexity, is given by the gravitational action itself, evaluated with suitable boundary conditions on the boundary of the spacetime region.
Our proposal, after specializing to two boundary dimensions, and keeping only the first two orders in a Taylor expansion of the Wilsonian cutoff, coincides with the path integral optimization proposal Caputa:2017urj ; Caputa:2017yrh ; Czech:2017ryf ; Takayanagi:2018pml ; Camargo:2019isp , and so for holographic CFTs can be seen as a generalization of that proposal to any dimension and finite cutoff. In path integral optimization one considers different preparations of a fixed state using CFT path integrals on background geometries differing by a Weyl factor. The proposal is to regard the change in the unnormalized path integral measure – which for AdS3 is given by the Liouville action Polyakov:1981rd – as a cost function. However, as recognized in Camargo:2019isp , minimization of such a cost function is not consistent with keeping only the terms which remain finite as the UV cutoff is removed. Our proposal avoids this problem altogether by considering the full gravitational action, with the Liouville action merely capturing the leading two terms in the limit where one takes the cutoff to infinity. Coming back to our tensor network motivation, it is interesting to note that in the course of the past several years substantial progress has been achieved in obtaining MERA from a systematic coarse-graining procedure rather than using it merely as an efficient variational ground state ansatz for critical spin chains. The key idea is to employ an entanglement-based coarse-graining of the discretized Euclidean path-integral Evenbly_2015 . This procedure is closely related to one where one puts the corresponding conformal field theories, which arise in the continuum limit of the critical spin chains, on curved geometries Milsted:2018yur ; Milsted:2018san . We find the apparent connection between these ideas and our setup quite intriguing.
In particular, in the language of Milsted:2018yur one would be tempted to regard the part of our circuit associated with moving in $t$ as composed of “euclideons”, whereas the motion in the $z$ direction would have to do with “isometries” and possibly “disentanglers”. It would certainly be very interesting to make this association more quantitative, perhaps using the results of Kruthoff:2020hsi , as there are currently several distinct proposals for associating geometries to MERA. It has been suggested to connect MERA to a hyperbolic geometry of an equal time slice of AdS as mentioned above, to a light-cone Milsted:2018san and to an auxiliary dS geometry Czech:2015kbp as in Beny:2011vh . The latter was motivated by work trying to probe the bulk geometry using non-local CFT observables such as entanglement entropy of spherical subregions, which gave rise to the kinematic space program Czech:2015qta ; Czech:2016xec ; deBoer:2016pqk . This confusing state of affairs was one of our motivations to try to sharpen the relation between gravity and quantum circuits. It should also be noted that a more precise relation between gravity and (other than MERA) tensor networks was proposed recently in Bao:2018pvs ; Bao:2019fpq . It is important to point out that we are not considering arbitrary circuits: all circuits are essentially composed of time evolution and changes in the local cutoff starting with a given initial state. One could certainly imagine more general circuits, but these would not be captured by a single semi-classical geometry and would require off-shell gravitational configurations. The latter are typically exponentially suppressed and we will not consider them in this paper. One can still try to find the optimal circuit within a given semi-classical geometry, by varying over $t_{i}$ and over $\rho(t)$.
In particular, taking $\rho(t_{i})=\infty$ corresponds to a CFT state where the CFT has a momentum cutoff brought down to zero, so this is akin to making the state at $t=t_{f}$ from “nothing” Hartle:1983ai . We find that optimizing complexity over this restricted set of circuits gives results quite similar to other holographic complexity proposals. Perhaps in these holographic situations, there is nothing to be gained (from a complexity point of view) by considering circuits that involve different semiclassical geometries. Finally, let us emphasize that there was earlier work, such as Nozaki:2012zj ; Miyaji:2015fia ; Takayanagi:2018pml ; Belin:2018bpg ; Belin:2020oib ; Boruch:2020wax ; Caputa:2020fbc , advocating the relation between quantum circuits and holographic geometry. In the present work we are building on these earlier developments to bring three important new elements into this discussion: being explicit about an initial state, realizing the need to keep the UV cutoff finite and interpreting it in terms of a $T\bar{T}$ deformation and, last but not least, making a connection with the kinematic space program. The outline of this paper is as follows. In section 2, we will first describe the general setup and then give an explicit example in vacuum AdS3, where we will find that our notion of complexity agrees with the complexity equals volume proposal and, in the limit $\rho(t)\rightarrow 0$, also with path-integral optimization. Subsequently, we discuss various finer points associated with our proposal in sections 2.2 and 2.3. This brings us to section 3, where we describe the relation between a change in the spacetime region and $T\bar{T}$-deformations using bulk flow equations. Considerations in this section will also allow us to argue that complexity is optimized if the boundary of the spacetime region has constant scalar curvature.
Finally, we will discuss some ideas to more directly connect the gravitational action to a gate counting procedure in section 4, and end with some conclusions and suggestions for future work in section 5. ## 2 Vacuum preparation using gravity The main idea of our construction is as follows: We can produce states in a CFT using path integrals over Euclidean manifolds with a boundary and operator insertions. Similarly, path integrals over manifolds with two boundaries can be interpreted as objects mapping states to states. We would like to think of these path integrals as describing a circuit which prepares states or maps states to states, and associate a notion of cost function to them which measures the effort it takes to perform these CFT operations in a given way. To define such a cost function it seems inevitable to introduce some sort of cutoff in the field theory. This cutoff defines a local lattice spacing and provides the natural length scale at which to define tensors which make up an approximate tensor network description of the CFT operation. The cutoff could in principle be space and time dependent. To determine the complexity, we are going to propose to use the unnormalized CFT path integral. An important issue is how to incorporate the space and time dependent cutoff in this computation. In the field theory, one could try to implement this by including a space and time dependent $T\bar{T}$-deformation in the CFT which is known to implement a particular type of finite cutoff Smirnov:2016lqw ; Cavaglia:2016oda . It seems difficult to compute the required path integral directly in the CFT, but luckily, for CFTs with a holographic dual we can use the AdS/CFT correspondence to do the computation. Following McGough:2016lol , the relevant AdS setup is one where we move the boundary of AdS a finite distance inward, with the time (and possibly space) dependent radial position corresponding to the cutoff or coefficient of the $T\bar{T}$ deformation. 
The partition function of the CFT with cutoff is then computed, to leading order, by the on-shell value of the gravitational action with a finite instead of an asymptotic boundary. There are some aspects of this proposal that require clarification. One is the choice of boundary for the gravitational path integral away from the surface where the cutoff CFT lives. For example, if the cutoff CFT lives on a hemisphere, we need to fill in the boundary of the hemisphere in AdS. There is in general no canonical slice in the bulk where the state “lives”. In the example that we consider below, there are always natural time-symmetric surfaces in the bulk which are the natural surfaces on which to bound the bulk path integral. Another issue with the construction is whether or not to include the standard counterterms for AdS/CFT for co-dimension one boundaries when evaluating the bulk action. Due to the existence of a finite cutoff, there is no strict need to do so, and not including them appears to be the most natural thing to do as we discuss below. A closely related issue comes from the fact that the full bulk region has corners, and one may need to include corner terms when evaluating the bulk action. We will also address this issue below. ### 2.1 Action calculation For simplicity and concreteness, we are going to consider the preparation of the ground state of a 2d CFT on a line using the Euclidean path integral. To this end, we take the standard Euclidean AdS solution, with the curvature scale $l_{AdS}=1$, $\displaystyle ds^{2}=\frac{dz^{2}+dt^{2}+dx^{2}}{z^{2}},$ (1) and the partition function of the CFT equals the exponential of minus the on-shell bulk action $\displaystyle I=\frac{1}{\kappa}\int_{M}d^{3}x\,\sqrt{G}\left(R+2\right)+\frac{2}{\kappa}\int_{\partial M}d^{2}x\,\sqrt{g}K+I_{c}.$ (2) $M$ is the bulk region bounded by $\rho(t)\leq z\leq\infty$ and $t_{i}\leq t\leq t_{f}$, as shown in figure 1.
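As an independent cross-check of the bulk term in (2) (a SymPy sketch of ours, not part of the paper's derivation): for the metric (1) with $l_{AdS}=1$ one finds $R=-6$, so that $\sqrt{G}(R+2)=-4/z^{3}$, which is exactly the bulk integrand appearing in the on-shell evaluation (8) below.

```python
import sympy as sp

z, t, x = sp.symbols('z t x', positive=True)
coords = [z, t, x]
# Euclidean Poincare AdS3, eq. (1): ds^2 = (dz^2 + dt^2 + dx^2)/z^2
g = sp.diag(1/z**2, 1/z**2, 1/z**2)
ginv = g.inv()
n = 3

# Christoffel symbols Gam[a][b][c] = Gamma^a_{bc}
Gam = [[[sum(ginv[a, d] * (sp.diff(g[d, b], coords[c]) + sp.diff(g[d, c], coords[b])
             - sp.diff(g[b, c], coords[d])) / 2 for d in range(n))
         for c in range(n)] for b in range(n)] for a in range(n)]

# Ricci scalar R = g^{bc} R_{bc} with
# R_{bc} = d_a Gamma^a_{bc} - d_c Gamma^a_{ba} + Gamma^a_{ad} Gamma^d_{bc} - Gamma^a_{cd} Gamma^d_{ba}
Rscal = sp.simplify(sum(
    ginv[b, c] * sum(sp.diff(Gam[a][b][c], coords[a]) - sp.diff(Gam[a][b][a], coords[c])
                     + sum(Gam[a][a][d] * Gam[d][b][c] - Gam[a][c][d] * Gam[d][b][a]
                           for d in range(n))
                     for a in range(n))
    for b in range(n) for c in range(n)))

sqrtG = sp.sqrt(g.det())
print(Rscal)                             # -6
print(sp.simplify(sqrtG * (Rscal + 2)))  # -4/z**3, the bulk integrand of (8)
```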
The a priori finite function $\rho(t)$ interpolates between the values $z=z_{i}$ at $t=t_{i}$ and $z=z_{f}$ at $t=t_{f}$, with $t_{i}\leq t_{f}$ and $z_{f}<z_{i}$. For simplicity we also take the setup to be independent of the transverse direction $x$. Furthermore, we write $\kappa=16\pi G_{N}$, $G$ for the 3d metric on $M$, $g$ for the induced 2d metric on $\partial M$, and $K$ for the trace of the extrinsic curvature. $\partial M$ is only piecewise smooth and has a kink or joint at $t=t_{f}$ and $t=t_{i}$ as shown in figure 1. Each joint contributes a term $\displaystyle I_{c}=\frac{2}{\kappa}\int dx\sqrt{j}\ \alpha$ (3) to the gravitational action. Herein, $\sqrt{j}$ is the length element along the joint and $\alpha$ is simply the angle between the two normal vectors of the two surfaces coming together at the joint (which may have either sign). Joint-terms of this type were studied by Hayward in Hayward:1993my ; Brill:1994mb , but in the Euclidean setting, which is of interest here, this was already done earlier in Hartle:1981cf , see also the discussion in Lehner:2016vdi . As discussed above, we are going to interpret the on-shell value of the bulk effective action of the region $M$ as the complexity of the circuit defined by the surface $z=\rho(t)$ which maps the vacuum state $|0\rangle_{z_{i}}$ with cutoff $z_{i}$ to the vacuum state $|0\rangle_{z_{f}}$ with cutoff $z_{f}$. If we use the relation between a finite radial cutoff and the coefficient $\mu$ of the $T\bar{T}$ deformation via McGough:2016lol , $\mu(t)=\kappa\,\rho(t)^{2},$ (4) we can reinterpret the states $|0\rangle_{\rho(t)}$ as ground states of the $T\bar{T}$ deformed CFT with a time-dependent coefficient $\mu(t)$.
Concretely, the induced line element on the boundary surface is $\displaystyle ds^{2}=\frac{(1+\dot{\rho}^{2})dt^{2}+dx^{2}}{\rho^{2}},$ (5) its Ricci scalar reads $\displaystyle R^{(d-1)}=\frac{2(\rho\ddot{\rho}-\dot{\rho}^{2}(1+\dot{\rho}^{2}))}{(1+\dot{\rho}^{2})^{2}},$ (6) the trace of the extrinsic curvature reads $\displaystyle K=\frac{\rho\ddot{\rho}+2(1+\dot{\rho}^{2})}{(1+\dot{\rho}^{2})^{3/2}},$ (7) and from (2) we obtain $\displaystyle I$ $\displaystyle=\frac{-4}{\kappa}\int_{M}d^{2}x\int_{z=\rho}^{\infty}\frac{dz}{z^{3}}+\frac{2}{\kappa}\int_{\partial M}d^{2}x\,\frac{\rho\ddot{\rho}+2(1+\dot{\rho}^{2})}{\rho^{2}(1+\dot{\rho}^{2})}+I_{c}$ $\displaystyle=\frac{2V_{x}}{\kappa}\int_{t_{i}}^{t_{f}}dt\,\frac{\rho\ddot{\rho}+(1+\dot{\rho}^{2})}{\rho^{2}(1+\dot{\rho}^{2})}+I_{c}$ (8) for the on-shell bulk action, where we have introduced $V_{x}=\int dx$. For the corner term, we also find $\displaystyle I_{c}=\frac{2V_{x}}{\kappa}\left(\frac{\pi/2-\arctan{\dot{\rho}(t_{f})}}{z_{f}}+\frac{\pi/2+\arctan{\dot{\rho}(t_{i})}}{z_{i}}\right).$ (9) Integrating by parts, this action can be written only using first derivatives of $\rho$, yielding $\displaystyle I=$ $\displaystyle\frac{2V_{x}}{\kappa}\int_{t_{i}}^{t_{f}}dt\,\left(\frac{1}{\rho^{2}}+\frac{\dot{\rho}\arctan{\dot{\rho}}}{\rho^{2}}\right)+\frac{\pi V_{x}}{\kappa}\left(\frac{1}{z_{f}}+\frac{1}{z_{i}}\right).$ (10) The terms which are independent of $\rho$ do not affect the equations of motion, and can always be removed by a suitable counter term, which we will assume to be done from now on. 
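The integration by parts leading from (8) to (10) can be checked symbolically: the two integrands differ by the total derivative of $-\arctan(\dot{\rho})/\rho$, whose endpoint values combine with the corner term (9) into the constant $\pi$-terms of (10). A short SymPy verification (ours, not part of the paper):

```python
import sympy as sp

t = sp.symbols('t')
rho = sp.Function('rho')(t)
rd, rdd = sp.diff(rho, t), sp.diff(rho, t, 2)

integrand_8 = (rho * rdd + 1 + rd**2) / (rho**2 * (1 + rd**2))  # integrand of (8)
integrand_10 = 1/rho**2 + rd * sp.atan(rd) / rho**2             # integrand of (10)
boundary = -sp.atan(rd) / rho                                   # total-derivative term

# The integrands agree up to d/dt of the boundary term:
diff_of_integrands = sp.simplify(integrand_10 - integrand_8 - sp.diff(boundary, t))
print(diff_of_integrands)  # 0
```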
We believe this is justified, as it is known Brill:1994mb ; Lehner:2016vdi that the joint term can spoil the additivity of the action under combining bulk regions, which besides the formulation of a well-defined variational principle is usually the second main reason for adding boundary terms to the action (2).222 Note that in our Euclidean setting, where spacelike surfaces have spacelike normal vectors, the joints under consideration are more similar to the timelike joints discussed in Brill:1994mb ; Lehner:2016vdi than to spacelike ones in a Lorentzian setting. The equations of motion obtained by extremizing (10) read $\displaystyle\frac{\rho\ddot{\rho}+(1+\dot{\rho}^{2})}{\rho^{3}(1+\dot{\rho}^{2})^{2}}=0.$ (11) The most immediately visible solution to this equation is the one where we formally take the limit $\dot{\rho}\rightarrow\infty$. This corresponds to the boundary surface turning into an equal-time slice, which is in fact where, based on the intuition surrounding holographic complexity and tensor networks, we expect the most optimized circuit preparing the state $|0\rangle_{z_{f}}$ to live, see e.g. Boruch:2020wax . The generic solution to (11) reads $\displaystyle\rho(t)=\sqrt{\mathcal{R}^{2}-(t-t_{0})^{2}}$ (12) and describes circular arcs of radius $\mathcal{R}$ centered on the boundary point at $t=t_{0}$. The formal solution $\dot{\rho}\rightarrow\infty$ corresponds to the limit of infinite radius. Our proposal is that the Euclidean action (10) (excluding the $\rho$-independent remnants of the joint terms) is a measure of the complexity of preparing the state $|0\rangle_{z_{f}}$ from the state $|0\rangle_{z_{i}}$ using the circuit described by $\rho(t)$. The optimal circuit, with fixed Euclidean time distance $\Delta t=|t_{f}-t_{i}|$, is then of the form (12), and the complexity of this circuit is given by evaluating the Euclidean action on this solution.
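That the circular-arc ansatz (12) indeed solves (11) is a quick symbolic check (ours); the equation of motion is equivalent to the vanishing of its numerator:

```python
import sympy as sp

t, t0, Rad = sp.symbols('t t_0 R', real=True)
rho = sp.sqrt(Rad**2 - (t - t0)**2)          # circular-arc ansatz (12)
rdot, rddot = sp.diff(rho, t), sp.diff(rho, t, 2)

# Numerator of the equation of motion (11): rho*rho'' + (1 + rho'^2)
eom = sp.simplify(rho * rddot + 1 + rdot**2)
print(eom)  # 0
```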
With the explicit boundary conditions being $\rho(t_{f})=z_{f}$ and $\rho(t_{i})=z_{i}$, the value of the Euclidean action in the first term of (10) is $\displaystyle I=\frac{2V_{x}}{\kappa}\left(\frac{1}{z_{f}}\arctan{\frac{z_{i}^{2}-z_{f}^{2}+\Delta t^{2}}{2z_{f}\Delta t}}-\frac{1}{z_{i}}\arctan{\frac{z_{i}^{2}-z_{f}^{2}-\Delta t^{2}}{2z_{i}\Delta t}}\right).$ (13) Note that this result comes entirely from the corner terms, as the first term in (8) exactly vanishes on-shell. Interpreting it as a function of the variable $t_{i}\leq t_{f}$ while keeping $z_{i}\neq z_{f}$ fixed, we can verify that the above expression is minimized by $t_{i}=t_{f}$. This corresponds to the limit $\mathcal{R}\rightarrow\infty$ or $\dot{\rho}\rightarrow\infty$ and hence the equal time slice that is intuitively expected to play a special role in describing the complexity of the state $|0\rangle_{z_{f}}$. Using $1/\kappa=c/24$ Brown:1986nw , the minimum value is given by $\displaystyle I_{min}=\frac{c{\pi}V_{x}}{24}\left(\frac{1}{z_{f}}-\frac{1}{z_{i}}\right),$ (14) which is proportional to the spatial volume of the strip $z_{f}\leq z\leq z_{i}$ on the equal time slice at $t=t_{f}$. Of course, if we send $z_{f}\rightarrow\epsilon\ll 1$ and $z_{i}\rightarrow\infty$, this reproduces the standard result of the volume proposal for the complexity of the CFT ground state. Clearly, this result also vanishes if $z_{i}=z_{f}$, which we take as a non-trivial consistency check and further justification for excluding the remnants of the joint terms in (10). 333As an illustrative example, imagine a Euclidean axisymmetric spacetime, with a spacetime region in the shape of a regular prism that breaks rotational symmetry around the axis to a discrete subgroup. 
In the limit where the radius of the prism goes to zero, the action on that region may not go to zero, as while bulk and surface terms vanish in this limit due to the vanishing of bulk volume and surface area, the joint terms will lead to a contribution proportional to an integral along the axis of symmetry. This remnant term is the analogue of the last bracket in (10). To close this section, let us compare our results to the ones that can be obtained from the Liouville action. For $\dot{\rho}\ll 1$, equation (10) can be approximated as $I=\frac{2V_{x}}{\kappa}\int dt\left(\frac{1}{\rho^{2}}+\frac{\dot{\rho}^{2}}{\rho^{2}}\right),$ (15) which, assuming no $x$-dependence, is equivalent to the Liouville Lagrangian $S_{L}=\frac{c}{24\pi}\int dt\int dx\left(\eta\,e^{2\omega}+\left(\partial_{t}\omega\right)^{2}+\left(\partial_{x}\omega\right)^{2}\right)$ (16) after a change of variables $\rho(t)\to(1/\sqrt{\eta})\,e^{-\omega(t)}$. Note that the physically interesting solution $\dot{\rho}\rightarrow\infty$ falls outside of the range of applicability of the approximation necessary to obtain the Liouville action from (10). The equations of motion derived from (15) take the form $\displaystyle\frac{\rho\ddot{\rho}+(1-\dot{\rho}^{2})}{\rho^{3}}=0.$ (17) As we will see below, these field equations also arise if we introduce a new time coordinate in order to bring the induced metric on the boundary into conformal gauge. ### 2.2 Conformal time and extremizing the action There is a subtle but crucial difference between our setup discussed in the previous subsection and the calculations of Boruch:2020wax , which we will discuss in this subsection in order to avoid confusion. To do so, we note that Boruch:2020wax investigates a setup similar to the one depicted in figure 1, and up to notation (8) also appears in the appendix of that paper.
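Before proceeding, the change of variables $\rho(t)\to(1/\sqrt{\eta})\,e^{-\omega(t)}$ relating (15) to the Liouville form (16) can be verified with a one-line SymPy computation (ours, not part of the paper):

```python
import sympy as sp

t = sp.symbols('t')
eta = sp.symbols('eta', positive=True)
omega = sp.Function('omega')

rho = sp.exp(-omega(t)) / sp.sqrt(eta)   # change of variables below (16)
integrand_15 = 1/rho**2 + sp.diff(rho, t)**2 / rho**2              # integrand of (15)
liouville = eta * sp.exp(2*omega(t)) + sp.diff(omega(t), t)**2     # t-part of (16)
print(sp.simplify(integrand_15 - liouville))  # 0
```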
Following Boruch:2020wax , we can now introduce a conformal time $u$, with $\displaystyle du=\sqrt{1+\dot{\rho}(t)^{2}}dt,$ (18) such that the line element (5) is transformed into the conformal gauge form $ds^{2}=\frac{du^{2}+dx^{2}}{\varrho(u)^{2}}.$ (19) Here, we have introduced a new variable such that $\varrho(u(t))=\rho(t)$. Under (18), the action (10) changes to Boruch:2020wax $I=\frac{2V_{x}}{\kappa}\int_{u_{i}[\varrho]}^{u_{f}[\varrho]}du\left(\frac{\sqrt{1-\varrho^{\prime 2}}+\varrho^{\prime}\arcsin{\varrho^{\prime}}}{\varrho^{2}}\right).$ (20) If we were to just identify the integrand in (20) as a Lagrangian and naively compute the Euler equations, we arrive at $\displaystyle\frac{\varrho\varrho^{\prime\prime}+2(1-\varrho^{\prime}{}^{2})}{\varrho^{3}(1-\varrho^{\prime}{}^{2})^{2}}=0,$ (21) which, up to notation and the addition of a nonzero tension term, are the equations which were studied in Boruch:2020wax . The subtlety announced at the beginning of the subsection is that (18) is a reparametrization of time which is dependent on the variable with respect to which we want to vary the action, hence formally in going from (10) to (20) the integration bounds $u_{i}$ and $u_{f}$ become themselves functionals of $\varrho$, and will lead to a nontrivial contribution according to Leibniz’s rule when varying the action. In fact it can be checked that introducing (18) and $\varrho(u)$ in the equation of motion (11) gives a result $\displaystyle\frac{\varrho\varrho^{\prime\prime}+(1-\varrho^{\prime}{}^{2})}{\varrho^{3}}=0$ (22) that is inequivalent to (21). Interestingly, (22) has the form of the Liouville equation (17), just for $\varrho(u)$ instead of $\rho(t)$. The most commonly known example where a field-dependent reparametrization can be useful is the Lagrangian for geodesic motion, which becomes a constant when introducing affine parametrization.
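The chain-rule computation showing that (11) transforms into (22), rather than (21), under the reparametrization (18) can also be done symbolically. From (18) and $\varrho(u(t))=\rho(t)$ one finds $(du/dt)^{2}=1/(1-\varrho^{\prime 2})$, which gives $\dot{\rho}$ and $\ddot{\rho}$ in terms of $u$-derivatives (a SymPy sketch of ours):

```python
import sympy as sp

u = sp.symbols('u')
vr = sp.Function('varrho')(u)
vp, vpp = sp.diff(vr, u), sp.diff(vr, u, 2)

# du = sqrt(1 + rhodot^2) dt together with rhodot = varrho'(u) * du/dt
# implies (du/dt)^2 = 1/(1 - varrho'^2), hence:
rhodot = vp / sp.sqrt(1 - vp**2)                    # d rho / dt
rhoddot = sp.diff(rhodot, u) / sp.sqrt(1 - vp**2)   # chain rule once more

lhs = vr * rhoddot + 1 + rhodot**2                  # numerator of (11)
rhs = (vr * vpp + 1 - vp**2) / (1 - vp**2)**2       # numerator of (22), rescaled
print(sp.simplify(lhs - rhs))  # 0
```

The equation of motion (11) therefore becomes proportional to the numerator of (22), confirming the inequivalence with the naive Euler equations (21).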
Of course, this does not mean that the equations of motion degenerate, as the full information about the value of the action – i.e. the length of the curve – is now entirely encoded in the integration domain. Unfortunately, the expression (20) rather inelegantly falls into a middle ground between the two possible extremes, as both the integrand and the integration bounds are functionals of the variable $\varrho$, and for this reason we found it intractable to work with. This does not mean that either our work or Boruch:2020wax is wrong, just that the two study different variational problems. We work with the action (10) where explicitly we assume Dirichlet boundary conditions for $\rho(t)$ at the fixed values $t=t_{f}$ and $t=t_{i}$, while Boruch:2020wax works with the action (20) with the implicit assumption of Dirichlet boundary conditions for the field $\varrho$ at fixed values of $u_{i},u_{f}$, which is an inequivalent mathematical exercise. ### 2.3 Comparison to AdS/BCFT models We can investigate this issue a bit further. So far, we have essentially considered what amounts to minisuperspace models, by plugging an ansatz into the action and deriving equations of motion for the function parametrizing that ansatz, instead of first deriving general equations of motion and then simplifying them with a given ansatz. How can we write our equations of motion in a form that is more suggestive for their general meaning and potential origin? We will do this in the next section, but as an aside, we will now demonstrate that the semicircle solutions that we found can also be obtained if we interpret the boundary of the bulk domain as an “end of the world brane” with an energy-momentum tensor describing matter with a very specific equation of state. The covariant equations of motion of this end of the world brane will imply the general equation that we will derive in the next section.
The derivation in the next section does not rely on an end of the world brane interpretation, and it remains to be seen whether this agreement is more than a technical coincidence. We should also point out that the work of Boruch:2020wax was strongly influenced by the type of AdS/boundary CFT (BCFT) models introduced in Takayanagi:2011zk ; Fujita:2011fp . In such models the boundary of the space on which the BCFT lives is also extended into the bulk spacetime in the form of an end of the world brane, on which Neumann boundary conditions are imposed. Besides the bulk Einstein equations, this leads to an equation of motion of the form $\displaystyle K_{\mu\nu}-Kg_{\mu\nu}=\frac{\kappa}{2}T_{\mu\nu}$ (23) which determines the embedding of the end of the world brane into the ambient space. These models allow for considerable bottom-up toy-model building freedom, and $T_{\mu\nu}$ is the energy-momentum tensor of any matter that lives in the brane worldvolume. In practice, it is often set to be a constant tension term $\displaystyle T_{\mu\nu}=\lambda\,g_{\mu\nu}$ (24) with tension $\lambda$. As reported in Boruch:2020wax , their equation of motion is consistent with (23). As we ignore tension terms, we would set the right hand side of (23) to zero, and apart from the equal-time slice obtained by $\dot{\rho}\rightarrow\infty$, our semicircular embeddings do not satisfy this equation. Interestingly, in a Lorentzian AdS/BCFT context, semicircular embeddings into Poincaré AdS were derived in Erdmenger:2014xya for a simple model of $T_{\mu\nu}$ given by a perfect fluid with equation of state $p=a\sigma$ ($p$ = pressure, $\sigma$ = energy density) in the limit $a\rightarrow\infty$. So we see that semicircular embeddings into Poincaré AdS do satisfy an equation of the form (23), just with a specific non-trivial right hand side.
Due to the peculiar limit in the parameter $a$, $T_{\mu\nu}$ satisfies the condition $\displaystyle\det[T_{\mu\nu}]=0$ (25) or equivalently $\displaystyle T_{\mu\nu}T^{\mu\nu}-T^{2}=0,$ (26) and hence $\displaystyle\det\left[K_{\mu\nu}-Kg_{\mu\nu}\right]=0,$ (27) or equivalently $\displaystyle(K_{\mu\nu}-Kg_{\mu\nu})(K^{\mu\nu}-Kg^{\mu\nu})-\operatorname{Tr}[K_{\mu\nu}-Kg_{\mu\nu}]^{2}=K_{\mu\nu}K^{\mu\nu}-K^{2}=0$ (28) for our semicircular embeddings (12), even though they were not derived from an AdS/BCFT ansatz in this paper. We will give a direct derivation of equation (28) as a flow equation for our complexity proposal in the following section.

## 3 Bulk action and $T\bar{T}$

We have considered the on-shell action of a cutout region of Poincaré AdS$_{3}$, and interpreted it as a complexity functional of states in $T\bar{T}$-deformed holographic CFTs. The relation (4) between the coefficient of the $T\bar{T}$ deformation and the radial location has been derived for a constant radial cutoff McGough:2016lol ; Taylor:2018xcy ; Hartman:2018tkw , but not for time-dependent $\rho(t)$. In this section we consider the flow equations which describe the movement of the cutoff surface in a fixed background. By integrating these flow equations we should be able to derive a more precise relation between the coefficient of the $T\bar{T}$ deformation and the location of the bulk surface. In addition, these flow equations will tell us how complexity changes as we change the surface locations, and for which surfaces complexity is optimized while keeping the initial and final state fixed.

### 3.1 Excluding counterterms

The relevant flow equation can most easily be derived using the ADM formalism ADM:1962 .
We will keep the number of spacetime dimensions free in what follows, and write the metric as $ds^{2}=N^{2}dr^{2}+g_{\mu\nu}(x,r)(dx^{\mu}+N^{\mu}dr)(dx^{\nu}+N^{\nu}dr).$ (29) This contains the usual lapse and shift functions, for which one can locally choose a convenient gauge $N=1$ and $N^{\mu}=0$. Following ADM and choosing units so that $\kappa=1$, we now write the Lagrangian in terms of canonical variables ${\cal L}=\sqrt{g}\left(\pi^{\mu\nu}\partial_{r}{g}_{\mu\nu}-NH-N^{\mu}H_{\mu}\right),$ (30) where the lapse and shift functions appear as Lagrange multipliers enforcing the Hamiltonian and momentum constraints $H=H^{\mu}=0.$ (31) The canonical momenta are given by Brown.York:1993 $\begin{split}\pi_{\mu\nu}&=\frac{1}{\sqrt{g}}\frac{\partial S}{\partial g^{\mu\nu}}\\\ &=-(K_{\mu\nu}-Kg_{\mu\nu})\\\ &=-\frac{1}{2}\left(\partial_{r}g_{\mu\nu}-g_{\mu\nu}g^{\rho\sigma}\partial_{r}g_{\rho\sigma}\right),\end{split}$ (32) where in the second step we used the fact that metric variations are given by the Brown-York tensor, and in the last step we used the explicit form of the extrinsic curvature for the metric (29) in the gauge $N=1$, $N^{\mu}=0$. Of course, the same result can also be obtained by explicitly rewriting the action as in (30). Using (32), we find for the radial derivative $\partial_{r}g_{\mu\nu}=-2\pi_{\mu\nu}+\frac{2}{d-2}g_{\mu\nu}\pi^{\rho}_{\rho}$ (33) where $d$ is the total number of bulk spacetime dimensions (in this paper we are predominantly interested in $d=3$). The Hamiltonian constraint can be computed from (30) and, for unit AdS radius, one finds $\begin{split}H&=R^{(d-1)}-2\Lambda-(K^{2}-K^{\mu\nu}K_{\mu\nu})\\\ &=R^{(d-1)}+(d-1)(d-2)+\pi^{\mu\nu}\pi_{\mu\nu}-\frac{1}{d-2}(\pi^{\rho}_{\rho})^{2}.\end{split}$ (34) It is fairly straightforward to include matter fields in the discussion; the Hamiltonian constraint will then also contain the Hamiltonian of the matter sector, but we will for simplicity restrict to the purely gravitational case. 
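As a quick sanity check of (32) and (34) (our addition, not part of the main derivation), one can evaluate the constraint on pure Euclidean Poincaré AdS$_{d}$ in flat slicing, $g_{\mu\nu}=e^{2r}\delta_{\mu\nu}$, in the gauge $N=1$, $N^{\mu}=0$:

```latex
\begin{align*}
\pi_{\mu\nu} &= -\tfrac{1}{2}\left(\partial_{r}g_{\mu\nu}
   - g_{\mu\nu}\,g^{\rho\sigma}\partial_{r}g_{\rho\sigma}\right)
 = -\tfrac{1}{2}\bigl(2g_{\mu\nu} - 2(d-1)\,g_{\mu\nu}\bigr)
 = (d-2)\,g_{\mu\nu}, \\[2pt]
\pi^{\mu\nu}\pi_{\mu\nu} &= (d-1)(d-2)^{2}, \qquad
\pi^{\rho}_{\rho} = (d-1)(d-2), \\[2pt]
H &= \underbrace{0}_{R^{(d-1)}} + (d-1)(d-2) + (d-1)(d-2)^{2}
   - \tfrac{1}{d-2}\,(d-1)^{2}(d-2)^{2} \\
  &= (d-1)(d-2)\bigl[1 + (d-2) - (d-1)\bigr] = 0,
\end{align*}
```

as required, since the flat slices have $R^{(d-1)}=0$.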
To describe the flow we imagine starting with a surface at constant $r$ and moving the cutoff slightly, so that $r\rightarrow r+\epsilon(x)$. For any surface, we can always locally find coordinates such that the surface is located at a fixed value of $r$ and the metric is in the ADM gauge, so there is no loss of generality in this assumption. Then $\begin{split}\delta_{\epsilon}S&=\int\epsilon(x)\partial_{r}g^{\mu\nu}\frac{\partial{S}}{\partial g^{\mu\nu}}\\\ &=\int\sqrt{g}\epsilon(x)\partial_{r}g^{\mu\nu}\pi_{\mu\nu}\\\ &=2\int\sqrt{g}\epsilon(x)\left({\pi}^{\mu\nu}{\pi}_{\mu\nu}-\frac{1}{d-2}({\pi}^{\rho}_{\rho})^{2}\right),\end{split}$ (35) where we used equation (33) for the radial dependence of the metric in terms of momenta. Interestingly, this is precisely of $T\bar{T}$ form, but with $T$ and $\bar{T}$ defined with respect to the metric variations of the finite surface, not the boundary at infinity. A more coordinate-independent way of stating the result is that as we move a surface in a given AdS background, we turn on a local $T\bar{T}$-deformation with a coefficient given by the orthogonal distance between the original and deformed surface. If we could relate the local $T$ and $\bar{T}$ on a given surface to the $T$ and $\bar{T}$ as defined at infinity, we could integrate these flow equations and write the final result in terms of a finite $T\bar{T}$ deformation of the theory at infinity. We leave a further exploration of this interesting question to future work, but, thinking of finite $T\bar{T}$ deformations in terms of a change in the boundary conditions for the metric, we expect it to involve the linearized Einstein equations around the background Guica:2019nzm . Clearly, using (32) for $d=3$, the variation of the action vanishes if equation (28) is satisfied. As is clear from the Hamiltonian constraint, this condition can also be phrased as $R^{(d-1)}+(d-1)(d-2)=0$, i.e. the boundary surface has constant scalar curvature.
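Spelling out the $d=3$ step (our addition): with $\pi_{\mu\nu}=-(K_{\mu\nu}-Kg_{\mu\nu})$ from (32),

```latex
\pi^{\rho}_{\rho} = -\bigl(K - (d-1)K\bigr) = (d-2)K, \qquad
\pi^{\mu\nu}\pi_{\mu\nu} = K^{\mu\nu}K_{\mu\nu} - 2K^{2} + (d-1)K^{2}
 = K^{\mu\nu}K_{\mu\nu} + (d-3)K^{2},
```

so for $d=3$ the bracket in (35) becomes $\pi^{\mu\nu}\pi_{\mu\nu}-(\pi^{\rho}_{\rho})^{2}=K^{\mu\nu}K_{\mu\nu}-K^{2}$, and its vanishing is precisely condition (28).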
Therefore, to optimize the complexity of the process we should use constant scalar curvature surfaces; the metric on a Euclidean AdS$_{d-1}$ manifold has precisely the required scalar curvature. This is consistent with the observation in section 2 that complexity is minimized if we take $t_{i}=t_{f}$ and consider a purely radial surface at the constant time slice $t=t_{f}$.

### 3.2 Including counterterms

So far the discussion has used the standard bulk AdS action without the inclusion of additional counterterms, which would render the on-shell value of the action finite as one takes the surface to the asymptotic boundary. As alluded to in the beginning, in the original appearance of Liouville theory as defining path integral complexity, the absence of the volume counterterm was important. Here we briefly discuss what happens if we add a volume term for the boundary surface with an arbitrary coefficient. In our discussion of the on-shell value of the action, it would add an extra term $S_{c.t}=-2\lambda\int d^{2}x\sqrt{g}=-2\lambda\int dtdx\frac{\sqrt{1+\dot{\rho}^{2}}}{\rho^{2}}.$ (36) Adding the counterterm modifies the field equations to $\displaystyle\frac{1}{\rho^{3}(1+\dot{\rho}^{2})}\left((\rho\ddot{\rho}+1+\dot{\rho}^{2})-\lambda\sqrt{1+\dot{\rho}^{2}}\left(\frac{1}{2}\rho\ddot{\rho}+1+\dot{\rho}^{2}\right)\right)=0.$ (37) We can also reconsider the flow equations in the presence of the volume counterterm. Denoting the volume counterterm as $S_{\rm vol}=-2\lambda(d-2)\int_{\partial M}\sqrt{g}$ (38) so that $\lambda=1$ is precisely the counterterm which would cancel the volume divergence near the AdS boundary, we now introduce $\tilde{\pi}_{\mu\nu}=\pi_{\mu\nu}-\lambda(d-2)g_{\mu\nu}$, so that these are precisely the canonical momenta in the presence of the extra boundary volume term.
The Hamiltonian constraint can be rewritten as $\begin{split}H&=R^{(d-1)}+(1-\lambda^{2})(d-1)(d-2)+\tilde{\pi}^{\mu\nu}\tilde{\pi}_{\mu\nu}-\frac{1}{d-2}(\tilde{\pi}^{\rho}_{\rho})^{2}-2\lambda\tilde{\pi}^{\rho}_{\rho}=0\end{split}$ (39) We can now consider two types of flows. We can consider the variation of the action as we change the radial surface in a given background, but we can also consider the variation of the action as we perform a conformal rescaling of the metric on the radial surface. In $d=3$, the latter does not require an adjustment of the bulk geometry, but in higher dimensions this is no longer true. It is therefore not clear whether conformal rescalings of the induced metric on the boundary surface are in general compatible with keeping the initial and final states fixed in $d>3$. Regardless, the change of the action under the first type of flow now reads $\begin{split}\delta_{\epsilon}S&=\int\epsilon(x)\partial_{r}g^{\mu\nu}\frac{\partial{\tilde{S}}}{\partial g^{\mu\nu}}\\\ &=\int\sqrt{g}\epsilon(x)\partial_{r}g^{\mu\nu}\tilde{\pi}_{\mu\nu}\\\ &=2\int\sqrt{g}\epsilon(x)\left({\tilde{\pi}}^{\mu\nu}{\tilde{\pi}}_{\mu\nu}-\frac{1}{d-2}({\tilde{\pi}}^{\rho}_{\rho})^{2}-\lambda{\tilde{\pi}}^{\rho}_{\rho}\right)\end{split}$ (40) and for the second type of flow with $\delta g^{\mu\nu}=\epsilon(x)g^{\mu\nu}$ $\begin{split}\delta_{\epsilon}\tilde{S}&=\int\epsilon(x)g^{\mu\nu}\frac{\partial\tilde{S}}{\partial g^{\mu\nu}}\\\ &=\int\sqrt{g}\epsilon(x)\tilde{\pi}^{\rho}_{\rho}\\\ &=\frac{1}{2\lambda}\int\sqrt{g}\epsilon(x)\left(R^{(d-1)}+(1-\lambda^{2})(d-1)(d-2)+\tilde{\pi}^{\mu\nu}\tilde{\pi}_{\mu\nu}-\frac{1}{d-2}(\tilde{\pi}^{\rho}_{\rho})^{2}\right).\end{split}$ (41) We see that both flows take the form of $T\bar{T}$ deformations, with various extra terms such as the scalar curvature and the trace of the stress tensor. 
Just as in the case without a counterterm ($\lambda=0$), it would be interesting to integrate these flows to finite flows starting at the AdS boundary. The first flow is extremized when the surface obeys $R^{(d-1)}+(d-1)(d-2)-\lambda(d-2)K=0$ (42) which still holds for an AdS$_{d-1}$ equal-time slice in AdS$_{d}$. As expected, for our setup (42) is equivalent to (37). The second flow, on the other hand, is extremized when $K=\lambda(d-1)$. This condition is not satisfied by an AdS$_{d-1}$ equal-time slice in AdS$_{d}$ unless $\lambda=0$. Moreover, as we indicated above, it is not clear whether the initial state and final state are kept fixed along the flow, and therefore the precise interpretation of this flow is somewhat unclear. In any case, it would be interesting to explore whether surfaces obeying (42) or $K=\lambda(d-1)$ have the potential to define a new notion of complexity. Finally, we notice that it is also possible to add higher order counterterms, but for those the connection to $T\bar{T}$ deformations becomes more complicated.

## 4 Towards counting elementary operations

### 4.1 Gravitational action from counting stress tensor insertions

The bulk computation from section 2, illustrated in figure 1, can be viewed in light of the results from the previous section as the following non-unitary circuit acting on the initial state $|0\rangle_{z_{f}}=P\exp\left[-\int_{t_{i}}^{t_{f}}dt\,(H_{\rho(t)}+\dot{\rho}\,[T\bar{T}]_{\rho(t)})\right]|0\rangle_{z_{i}}.$ (43) In the above expression, $H_{\rho(t)}$ represents Euclidean time evolution in a CFT with cutoff specified by $\rho(t)$, which in the bulk would correspond to moving in the $t$-direction while keeping $\rho(t)$ fixed. The other term represents the operator that implements a change in the scale of the theory, which we have schematically denoted by $[T\bar{T}]_{\rho(t)}$. In the bulk this would correspond to changing $\rho$ while keeping $t$ fixed.
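To make the counting in what follows concrete, the path-ordered exponential (43) can be discretized into layers of width $\delta t$ (an illustrative rewriting, with our notation $t_{k}=t_{i}+k\,\delta t$):

```latex
|0\rangle_{z_{f}} \;=\; \lim_{\delta t\to 0}\;
\prod_{k=N}^{1}\exp\!\Big[-\delta t\,\Big(H_{\rho(t_{k})}
 + \dot{\rho}(t_{k})\,[T\bar{T}]_{\rho(t_{k})}\Big)\Big]\,|0\rangle_{z_{i}},
\qquad N\,\delta t = t_{f}-t_{i},
```

so that in layer $k$ the operator $H_{\rho(t_{k})}$ enters with weight $\delta t$, while $[T\bar{T}]_{\rho(t_{k})}$ enters with weight $\delta t\,\dot{\rho}(t_{k})$.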
What is important is that each layer of (43) in general uses operators from a different theory. We assume that $H_{\rho(t)}$ and $[T\bar{T}]_{\rho(t)}$ can, at least in principle, be written and understood as operators in the undeformed field theory, say by explicitly solving the Lagrangian $T\bar{T}$ flow equation, and that the action of those operators on states in the undeformed theory is well-defined. Following the gate counting ideas of Nielsen1133 ; Jefferson:2017sdb ; Chapman:2017rqy ; Camargo:2019isp , due to the spatial homogeneity of our setup one might be tempted to regard $H_{\rho(t)}$ and $[T\bar{T}]_{\rho(t)}$ as two classes of elementary operations, with $\rho(t)$ playing two independent roles. The first role played by $\rho(t)$ lies in labelling the elementary operations we are using (as already mentioned, both $H_{\rho(t)}$ and $[T\bar{T}]_{\rho(t)}$ are different operators for each value of $\rho(t)$). The second role stems from $dt\,\dot{\rho}$ being related to the number of times the operator $[T\bar{T}]_{\rho(t)}$ is applied in a given layer of the circuit. Correspondingly, $H_{\rho(t)}$ is applied simply $dt\cdot 1$ times. As a result, a naïve way of counting insertions of $H_{\rho(t)}$ would be $\int_{t_{i}}^{t_{f}}dt\,1$, and $\int_{t_{i}}^{t_{f}}dt\,|\dot{\rho}|$ when it comes to $[T\bar{T}]_{\rho(t)}$, with the total number of insertions being the sum of the two contributions. Note that since $\rho(t)$ is just a label, in principle the contribution from every layer can be weighted by some non-negative function of $\rho(t)$ – a penalty factor that weighs the hardness of applying particular transformations. The above logic was based on an $L_{1}$ norm of the vector $\{1,\dot{\rho}\}$, but in principle any norm would do. However, our proposal is to view the action (8) or (10) as a cost function for the circuit (43). It seems quite straightforward to associate a suitable weight to $H_{\rho(t)}$.
If we imagine a CFT with a fixed cutoff or lattice spacing $\sim\rho$, and we count the number of lattice points in a given Euclidean volume (which we interpret as suitable tensor operations), then we immediately obtain an answer proportional to $\int dtdx\rho^{-2}=\int dtV_{x}\rho^{-2}$, which is indeed proportional to the potential term in the action (10). Within the logic outlined in the previous paragraph, this corresponds to using the $L_{1}$ norm with the penalty factor equal to $V_{x}\rho^{-2}$. The second term in the action (10) is tricky to interpret within the framework of Nielsen1133 ; Jefferson:2017sdb ; Chapman:2017rqy . Following the above logic, one would naturally be inclined to associate this term with the presence of $[T\bar{T}]_{\rho(t)}$ insertions in the circuit (43); however, this turns out to be difficult. Writing the relevant contribution as $\frac{V_{x}\,|\arctan{\dot{\rho}}|}{\rho^{2}}\,|\dot{\rho}|$, one does not recognize a standard penalty factor in front of $|\dot{\rho}|$ within an $L_{1}$ norm. The problem is that the penalty factor is not supposed to know about what the circuit does at other layers, and the dependence on $\dot{\rho}$ via $\arctan{\dot{\rho}}$ induces exactly such a dependence. This is very much reminiscent of the discussion in Camargo:2019isp about viewing the Liouville action as a bona fide cost function. Following this thread, our interpretation of the “potential” and “kinetic” terms in the gravity action (10) is quite similar to the earlier qualitative interpretation of the Liouville action as the complexity of a tensor network Czech:2017ryf ; Caputa:2017yrh . In our setup, the “potential” term counts Euclideons, whereas the “kinetic” term is associated with changes in the cutoff and might be related to isometries or full layers of MERA.
Note also that the circuit (43) is very similar to the expression usually written down for cMERA Haegeman:2011uy , which takes the form of a path-ordered exponent of infinitesimal unitaries and dilatation operators. An immediate issue with this expression is the precise meaning of the operator $[T\bar{T}]_{\rho(t)}$ in the CFT, as we have seen that a careful construction of this operator requires one to integrate the flow equations from the AdS boundary to the bulk surface. If we knew the precise definition of this operator in the CFT, we could try to assign a number to this path-ordered exponent, for example by computing the length of the trajectory in a suitable space of operators. It is not inconceivable that such a computation is possible, as we know the commutation relations between the Hamiltonian and the $T\bar{T}$ operator in the CFT, and it would be interesting to explore this further. Furthermore, it is intriguing to note that $\arctan\dot{\rho}$ is the angle that the surface makes in the $z,t$-plane, so it looks like this term is measuring the amount of effort it takes to rotate the surface in the $z,t$-plane. It would be very interesting to understand this observation better. Since our interpretation of the gravity action in terms of gate counting is more on the qualitative side, in the following we want to propose another way of arriving at (8).

### 4.2 Relation to kinematic space

In the above, we have often tacitly assumed that the information about the bulk surface $z=\rho(t)$ is encoded locally in the boundary theory. However, as our discussion of flows shows, it is highly questionable whether this is a reasonable assumption. A better way to encode the information of the surface $z=\rho(t)$ in the boundary theory is through pairs of points $(t_{1}(t),t_{2}(t))$ (with $x=0$) on the boundary, such that the geodesic that starts at $t_{1}(t)$ and ends at $t_{2}(t)$ is tangent to the bulk surface at the point $(z=\rho(t),t,0)$, see figure 2.
Figure 2: We can parametrize a generic bulk curve $\rho(t)$ by the pairs of boundary points $(t_{1}(t),t_{2}(t))$, such that a bulk geodesic connecting these two points is tangent to the bulk curve at $z=\rho(t)$. This way, the profile $\rho(t)$ is encoded as a path in kinematic space, the space of bulk geodesics. This construction has the benefit of being covariant, and viewing Euclidean time as another spatial coordinate, these geodesics encode precisely the entanglement wedges which touch the surface but do not cross it. In other words, they precisely encode the information about those regions of spacetime we try to omit in our bulk path integral construction. One can ask whether there is a natural geometry associated to the pairs of points of this type, and the answer is yes. Conformal invariance produces a natural metric on the space of pairs of points, also known as kinematic space Czech:2015qta . For the case at hand it is given up to an undetermined constant prefactor by the 2d de Sitter metric $ds_{ks}^{2}=\frac{-dt_{1}\,dt_{2}}{(t_{1}-t_{2})^{2}}.$ (44) In the spirit of defining complexity by assigning a metric to a group of transformations Nielsen1133 ; Jefferson:2017sdb ; Chapman:2017rqy , we can now ask what the length of the path in this geometry associated with $\rho(t)$ is. To compute it explicitly, we need the explicit form of $t_{1}(t)$ and $t_{2}(t)$. These are given by $t_{1,2}(t)=t+\rho\dot{\rho}\pm\rho\sqrt{\dot{\rho}^{2}+1}.$ (45) Consider now the action $S_{ks}\sim\int\frac{dx}{\rho}ds_{ks}(t),$ (46) where we included the coordinate $x$ in units of the cutoff $\rho$, and the distance $ds$ obtained from (44) upon inserting (45). This results in $S_{ks}\sim\int dtdx\left|\frac{\rho\ddot{\rho}+(1+\dot{\rho}^{2})}{\rho^{2}(1+\dot{\rho}^{2})}\right|,$ (47) which agrees precisely with the bulk action in the form (8) as long as $\ddot{\rho}\geq-\rho^{-1}(1+\dot{\rho}^{2})$. 
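The endpoints (45) follow from elementary geometry (a derivation we sketch here for completeness): geodesics in Euclidean Poincaré AdS are semicircles $z^{2}+(t-c)^{2}=R^{2}$ centered on the boundary, and tangency to $z=\rho(t)$ at the point $(\rho,t)$ fixes $c$ and $R$:

```latex
\frac{dz}{dt}\bigg|_{\text{circle}} = -\frac{t-c}{z} = \dot{\rho}
\;\;\Rightarrow\;\; c = t + \rho\dot{\rho}, \qquad
R^{2} = \rho^{2} + (t-c)^{2} = \rho^{2}\left(1+\dot{\rho}^{2}\right),
```

so the endpoints on the boundary $z=0$ are $t_{1,2}=c\pm R=t+\rho\dot{\rho}\pm\rho\sqrt{1+\dot{\rho}^{2}}$, reproducing (45).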
This is related to the fact that kinematic space is a Lorentzian manifold, and the condition in question is precisely the requirement that the path traversed there is timelike. This strongly suggests that the relevant circuit geometry for these types of finite bulk surface computations is a version of kinematic space. (Note that this analysis did not include corner term contributions to the action; in the kinematic space framework there are additional contributions for open bulk curves that we have not included, and it would certainly be interesting to see whether they reproduce the corner terms. We would like to thank Bartek Czech for bringing this up.) Note that on-shell, (47) vanishes exactly for the semi-circular arcs that solve (11), as they are also geodesics in AdS space. In other words, for these solutions the path traversed in kinematic space shrinks to a point. We will come back to this point in section 5. Note that alternatively one may use the standard kinematic space prescription built around the entanglement entropy of intervals on constant $t$ time slices. The metric (44) is the same but now with $t_{1}$ and $t_{2}$ replaced simply by $x_{1}$ and $x_{2}$ with $x_{1,2}(t)=\pm\rho(t).$ (48) Using again (46) gives this time $S_{ks^{\prime}}\sim\int dtdx\frac{|\dot{\rho}|}{\rho^{2}}.$ (49) This is clearly a different expression from (47); it does, however, bear a striking similarity to the gate counting approach of Nielsen1133 ; Jefferson:2017sdb ; Chapman:2017rqy when the latter uses a Manhattan norm. Let us mention that generalizing the kinematic space consideration leading to (47) to more complicated geometries is not obvious, as minimal geodesics do not necessarily penetrate the whole spacetime. In the case of geodesics computing the entanglement entropy, the unprobed regions are known as entanglement shadows Balasubramanian:2014sra , and they appear, for example, in the case of double-sided black holes.
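The reduction of (44) with (45) to the integrand of (47), and its vanishing on the semicircular arcs, can be verified symbolically; a minimal sketch with SymPy (variable names are ours):

```python
import sympy as sp

t, t0 = sp.symbols('t t0', real=True)
R0 = sp.symbols('R0', positive=True)
rho = sp.Function('rho')(t)
rd = sp.diff(rho, t)

# Endpoints (45) of the boundary-anchored geodesic tangent to z = rho(t)
t1 = t + rho*rd + rho*sp.sqrt(1 + rd**2)
t2 = t + rho*rd - rho*sp.sqrt(1 + rd**2)

# Kinematic-space line element (44): ds^2 = -dt1 dt2 / (t1 - t2)^2
ds2 = -sp.diff(t1, t)*sp.diff(t2, t)/(t1 - t2)**2

# Numerator of (47); the path is timelike, ds^2 = -(num/(2 rho (1+rd^2)))^2 dt^2
num = rho*sp.diff(rho, t, 2) + 1 + rd**2
target = (num/(2*rho*(1 + rd**2)))**2
assert sp.simplify(ds2 + target) == 0

# Semicircles rho = sqrt(R0^2 - (t-t0)^2) annihilate the numerator,
# so the kinematic-space path length (47) vanishes on-shell
semi = sp.sqrt(R0**2 - (t - t0)**2)
assert sp.simplify(num.subs(rho, semi).doit()) == 0
```

Dividing the resulting $|ds_{ks}|$ by $\rho$ as in (46) then reproduces (47) up to an overall constant, consistent with the proportionality sign used there.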
Finally, let us mention that the relation between kinematic space and complexity was explored earlier in two different instances in Abt:2017pmf and Chen:2020nlj ; however, these proposals are distinct from ours and use a standard entanglement-based kinematic space.

## 5 Outlook

In this paper we have discussed the idea that finite spacetime regions correspond to quantum circuits with a complexity given by the on-shell value of the gravitational action. We found several intriguing results, but much more work remains to be done to put our results on a firmer footing. Perhaps the most pressing task is to find a more precise circuit interpretation along the lines we discussed in the previous section. Some other obvious aspects to explore are the impact of counterterms, higher derivative terms and matter fields on the computations. We list some further open issues and ideas for future work below.

### Relation with tensor network renormalization

As we described in the introduction, one of the long-standing questions in quantum information aspects of holography is understanding the relation between holographic geometries and the MERA tensor network. With the advent of McGough:2016lol , it is natural to expect that the tensor network description should include $T\bar{T}$ deformations in some way. It would certainly be very interesting to explore to what extent this is the case in the existing formulation of MERA and to what extent this calls for an alternative approach, see also Kruthoff:2020hsi . In particular, the discussion of the geometric interpretation of MERA in Milsted:2018san interprets a latticized CFT on a hyperbolic disc geometry as intertwined layers of MERA and Euclideons (Euclidean time evolution). By contrast, in the present paper, the hyperbolic disc geometry of the maximum volume bulk time slice arises from the absence of Euclidean time evolution in the circuit defined by (43).
### Global AdS and trivial initial state

It is straightforward to repeat our computations in global AdS, as opposed to the Poincaré patch of AdS. There are no major conceptual changes, except that we can now choose a smooth surface without the need to pick an initial state. Stated differently, we have chosen a trivial initial state in the CFT with infinite cutoff, or equivalently, we have a no-boundary type construction of the state at later times. The optimization proceeds exactly as in Poincaré coordinates, and complexity is optimized if the spacetime region collapses onto an equal-time disc, with complexity proportional to the volume of the disc.

### Choice of time slice

In our proposal we chose to bound our bulk spacetime subregion by constant Poincaré time slices. One could ask whether we could have made a different choice of boundaries and still have obtained an action that could reasonably be interpreted as circuit complexity. In the first place, our choice satisfies the physical requirement that the complexity of circuits which do nothing (no evolution in Euclidean time, nor change in cutoff) should be zero. This would not have been the case had we bounded our region by, say, constant $z$ surfaces rather than constant $t$ surfaces. We also could have considered a wiggly spacelike boundary, which along with our timelike boundary would be dynamically determined by extremization of the gravitational action. However, for Poincaré AdS we showed in section 3.1 that constant $t$ slices are just such an extremum of the action, so one can consider them to have been dynamically determined from the perspective of this modified proposal. It is not clear whether this modification would always give sensible results in general asymptotically AdS spacetimes, and this is an interesting direction to consider for future work. For stationary spacetimes, it seems reasonable to take fixed time slices, but it is not clear what to do for more general spacetimes.
States in gravity are not associated to a unique time slice. In some sense, states are associated to complete causal diamonds in Lorentzian signature. There is therefore no canonical choice of initial and final time slices which bound the spacetime region. Our proposal is to use slices with vanishing extrinsic curvature $K$, as these are covariantly defined and lead to a vanishing contribution of the Gibbons-Hawking boundary term. This choice will give rise to corner contributions, but those seem unavoidable for any choice, and as we saw in the case where the spacetime region collapses to a disc, they are a feature rather than a bug. One could, alternatively, try to extend the spacetime region indefinitely into the past or future, and subtract the contributions of these semi-infinite pieces later, but this procedure has exactly the same ambiguity in it. It would be interesting to have a better understanding of the various choices one can make for the future and past boundaries and what the implications of these choices are. It might, for example, also be natural to take time slices of constant scalar curvature, as complexity is locally extremized for that choice of time slice.

### A finite deformation of Liouville

The effective action for a finite bounded region in AdS is of independent interest, as it computes the partition function for the CFT with a cutoff on particular curved manifolds. In the limit where the bounded region approaches the boundary of AdS, we recover the CFT partition function (including divergent terms), which in 2d is given by the Polyakov action, and in conformal gauge becomes the Liouville action. It is interesting to see that (20) is apparently a finitely deformed version of Liouville theory for a space-independent Liouville field $\varrho(u)=\exp(-\phi(u))$. If we insert this and take $\phi(u)\rightarrow\infty$, we indeed recover Liouville theory, see also the discussion in Boruch:2020wax .
One might think that (20) describes a finite $T\bar{T}$ deformation of Liouville theory, and it would be interesting to make that connection precise. A possible route to address this matter is to cast the on-shell action in a form involving the scalar curvature of the cutoff surface, which seems feasible in the ADM formalism, compare with Polyakov’s non-local form of the effective action, and identify the relevant deformation.

### Other dimensions

In higher dimensions, the computation is more or less the same, and we will not present the relevant details here. An exception is AdS$_{2}$, where after a partial integration the action becomes proportional to $I\sim\int dt\rho^{-1}$, which suggests that the coarse graining operation has no cost associated to it. This is perhaps a consequence of the peculiar nature of the AdS$_{2}$/CFT$_{1}$ correspondence, where AdS$_{2}$ is merely dual to the ground states of the CFT$_{1}$ and is of limited relevance. It would be interesting to repeat the computation for JT gravity Teitelboim:1983ux ; Jackiw:1984je and to compare to flows in spaces of Hamiltonians, which are much easier to control than $T\bar{T}$ deformations in higher dimensions, and might lead to a more precise gate counting interpretation. In $T\bar{T}$-deformed quantum mechanics, the Hamiltonian is mapped to a function of itself Gross:2019ach ; Gross:2019uxi , $H\mapsto f(H).$ (50) Suppose we wish to quantify the complexity of the circuit created by Euclidean time evolution, $U(t)=\exp(-H\,t).$ (51) Given $U(t)$ of the undeformed theory, we in principle know the operator $U_{f}(t)$ of the deformed theory, $U_{f}(t)=\exp(-tf(-\partial_{t}\log U(t))),$ (52) but even given this simple relation, it is not clear how to relate the complexities of $U(t)$ and $U_{f}(t)$. One puzzle arises when combining complexity and holographic $T\bar{T}$.
Increasing the $T\bar{T}$ deformation is dual to bringing in the cutoff surface, which reduces the volume of the maximal boundary-anchored volume slice; by the CV conjecture, this would imply that the complexity of the state is similarly reduced. If the volume of the maximal volume bulk slice monotonically decreases as the boundary is brought in, then the complexity is monotonically decreasing under the flow as well. Is there something special about the holographic $T\bar{T}$ deformation such that the complexity of geometric states monotonically decreases under its flow, or is the CV proposal incorrect at finite cutoff?

### Lorentzian geometries

We could repeat our computation in Lorentzian signature, but then several new features arise. First, there is the qualitative difference of whether $z=\rho(t)$ describes a timelike or spacelike surface. In the timelike case the region is delimited by the lightfronts $t=\pm z$, and the on-shell action takes the form $\displaystyle I=\frac{2}{\kappa}\int_{\partial M}d^{2}x\,\frac{\rho\ddot{\rho}+(1-\dot{\rho}^{2})}{\rho^{2}(1-\dot{\rho}^{2})}.$ (53) Integration by parts yields $\displaystyle I=\frac{2}{\kappa}\int_{\partial M}d^{2}x\left[\frac{1}{\rho^{2}}-\frac{\dot{\rho}}{2\rho^{2}}\left(\text{log}(1-\dot{\rho})-\text{log}(1+\dot{\rho})\right)\right].$ (54) It is easy to see that this expression diverges in the limit $\dot{\rho}\rightarrow\pm 1$, i.e. when the surface becomes tangent to the lightfronts $t=\pm z$. It is possible to properly define gravitational actions in the presence of null boundaries Lehner:2016vdi , and in order for our proposal to make sense we should modify it so that in the null limit it approaches the answer of Lehner:2016vdi . With this modification we would then be in agreement with the complexity equals action proposal.
If we start with a spacelike surface and begin optimizing, there are two possibilities: we either find a constant scalar curvature surface, or we encounter the same null boundaries as in the previous timelike case. Which of the two optimizes the gravitational action depends on whether we choose $+S$ or $-S$ to optimize, and since it is $e^{iS}$ which appears in the path integral, it is not a priori clear which of the two we should take in the absence of a precise gate-counting interpretation. One would be inclined, though, to pick the sign such that the term proportional to $1/\rho^{2}$ and independent of $\dot{\rho}$ is positive, so that time evolution at fixed cutoff has positive complexity. Regardless, we seem to universally find either constant scalar curvature surfaces or null surfaces as extrema of the optimization problem. ### BTZ black hole Based on general arguments, there are several key features that measures of complexity should possess, such as the aforementioned asymptotic linear growth in time in black hole backgrounds and the switchback effect Susskind:2014jwa . As a first heuristic check, we can investigate constant scalar curvature slices (with the right value of the scalar curvature) in the BTZ black hole. In Kruskal coordinates, the BTZ black hole looks like $\displaystyle ds^{2}=-\frac{4\,du\,dv}{(1+uv)^{2}}+\frac{(1-uv)^{2}}{(1+uv)^{2}}d\phi^{2}$ (55) with the asymptotic AdS boundaries located at $uv=-1$. The relevant constant curvature slices turn out to take the simple form $uv+\lambda u+\mu v-1=0$. Consider the special case $uv+(u+v)/\sinh\xi-1=0$, which intersects the boundary at $u=e^{\xi}$, $v=-e^{-\xi}$, and on the other boundary of the eternal black hole at the point obtained by interchanging $u$ and $v$. Shifting $\xi$ is therefore like shifting time upwards on both asymptotic boundaries, and we are interested in the behavior at late $\xi$.
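The stated properties of this slice are easy to verify numerically: the point $u=e^{\xi}$, $v=-e^{-\xi}$ satisfies both $uv=-1$ and the slice equation, and the symmetric point $u=v$ solves $u^{2}+2u/\sinh\xi-1=0$, giving $u=\tanh(\xi/2)$. A short check (the value of $\xi$ is illustrative):

```python
import math

# Check the special constant-curvature slice uv + (u+v)/sinh(xi) - 1 = 0:
# it meets the boundary uv = -1 at u = e^xi, v = -e^(-xi), and its
# symmetric point u = v solves u^2 + 2u/sinh(xi) - 1 = 0, i.e. u = tanh(xi/2).

def slice_eq(u, v, xi):
    return u * v + (u + v) / math.sinh(xi) - 1.0

xi = 1.3  # illustrative value
u_b, v_b = math.exp(xi), -math.exp(-xi)
assert abs(u_b * v_b + 1.0) < 1e-12         # boundary point: uv = -1
assert abs(slice_eq(u_b, v_b, xi)) < 1e-12  # the point lies on the slice

u_mid = math.tanh(xi / 2)
assert abs(slice_eq(u_mid, u_mid, xi)) < 1e-12
assert 0.0 < u_mid < 1.0  # reaches the singularity uv = 1 only as xi -> infinity
```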
The midpoint of the slice is at $u=v=\tanh(\xi/2)$, which indeed moves towards the singularity at $uv=1$ as $\xi\rightarrow\infty$. Therefore, constant scalar curvature slices do correctly probe the growing Einstein-Rosen bridge. The optimal spacetime region in this case is the region between the maximal volume slice with $K=0$ (which is where we propose to end the spacetime region, as discussed above) and the constant curvature slice. We have not computed the gravitational action associated with this region, but expect it to reproduce the required late-time growth. As the maximal volume slice is also explicitly computable Carmi:2017jqz , we leave this interesting exercise to future work. ### Higher curvature corrections We have proposed that the complexity of the circuit that maps between ground states in two EFTs with different finite cutoffs is given by the on-shell gravitational action. Considering the effect of higher curvature corrections on the gravitational action, and therefore on complexity, would be a natural extension of this proposal. Higher curvature corrections to the holographic complexity=volume proposal were recently studied in Hernandez:2020nem . The simplest example to study is Gauss-Bonnet (GB) gravity in AdS5. The GB correction $\mathcal{L}^{GB}=R^{2}-4R_{\mu\nu}R^{\mu\nu}+R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}$ (56) is a constant in the vacuum AdS geometry we have studied in this work, so it simply rescales the contribution from the volume of $M$ to the on-shell action. In general, however, we expect non-trivial contributions from the correction to the boundary Lagrangian, and from the bulk Lagrangian $\mathcal{L}^{GB}$ for perturbed geometries. Seeing whether the resulting on-shell action could be interpreted as the complexity of a circuit in a $T\bar{T}$-deformed CFT4 with $a\neq c$ would be interesting. To our knowledge, the generalization of holographic $T\bar{T}$ to higher curvature gravity has not been studied.
There is a general method to determine the deformation of a holographic CFT needed to put the dual gravity theory at finite cutoff Taylor:2018xcy ; Hartman:2018tkw . To start, we note that in a theory with only one dimensionful parameter $\mu$ (assumed to have no spacetime dependence), such as a $T\bar{T}$-deformed CFT, the effective action changes under an infinitesimal length rescaling as $\Delta_{\mu}\mu\partial_{\mu}W=\int d^{d}x\sqrt{\gamma}\langle\operatorname{Tr}T\rangle\,,$ (57) where $\Delta_{\mu}$ is the scaling dimension of $\mu$. The flow of the boundary action under a change of $\mu$ is thus determined by the trace of the boundary stress tensor, which is related to the bulk Brown-York stress tensor through $T_{ij}=r_{c}^{d-2}\tilde{T}_{ij}$. The essence of the method is to use the Hamiltonian constraint to eliminate extrinsic curvature terms in the Brown-York stress tensor appearing in (57). Following this procedure for pure Einstein gravity gives the flow equation $\frac{\partial W}{\partial\mu}=\int d^{d}x\sqrt{\gamma}\left(T^{ij}T_{ij}-\frac{1}{d-1}(T^{i}_{i})^{2}\right)\,.$ (58) This is the field theory deformation flow equation for Einstein gravity at finite cutoff. When we add higher curvature corrections, the method of substituting out extrinsic curvature terms in the Brown-York stress tensor for powers of the stress tensor using the Hamiltonian constraint does not change, but the Brown-York stress tensor and the radial Hamiltonian constraint do Davis:2002gn , and the final result will differ from (58). ### Relation to entanglement wedge reconstruction Finally, it is interesting to ask whether or not it is a mathematical coincidence that the solutions (12) to (11) are semicircular arcs, just like geodesics in the Poincaré AdS background, even though in our setup they describe the embeddings of co-dimension one surfaces, not entanglement entropy. In section 4.2, we addressed this question from a kinematic space perspective.
There, we showed that a generic bulk profile $\rho(t)$ can equally well be described as a path in kinematic space (neglecting a possible $x$-dependence, as throughout the paper), but equations (11) and (12) enforce this path to shrink down to a zero-length point for fixed boundary conditions $t_{i},t_{f},z_{i},z_{f}$. Assuming that our results can be generalised to the Lorentzian case, it will be interesting to investigate whether these semicircular embeddings are a consequence of entanglement wedge reconstruction or entanglement wedge nesting Wall:2012uf ; Czech:2012bh ; Akers:2016ugt . For example, one might imagine that the optimization of the path integral (subject to the boundary conditions $t_{i},t_{f},z_{i},z_{f}$) drives the bulk surface as deep into the bulk as possible, until it leaves the entanglement wedge and cannot move further. In the fully Lorentzian case, it would be interesting to determine whether this only reproduces the extremal surface at the edge of the entanglement wedge, or also its null boundaries. At least extremal area surfaces are expected to play a role of special importance in quantum gravity for quite generic reasons Camps:2019opl . As mentioned in section 2.2, such semicircular boundaries have also been found in Erdmenger:2014xya as solutions of an AdS/BCFT toy model with non-trivial matter content living on the worldvolume of the end-of-the-world brane. In that paper, it was shown, as a consequence of physical energy conditions on the worldvolume matter fields, that the corresponding branes generally have to be extremal surface barriers in the sense of Engelhardt:2013tra , and the branes with semicircular embedding profiles were precisely the ones staying as close to the boundary as possible without violating this condition. While all these different observations seem to point to a nontrivial quantum-information-theoretic reason for the embeddings (12) being the correct ones, we leave further investigation of this to future work.
Besides extending our work to the Lorentzian case, investigating similar setups in higher dimensions or on nontrivial backgrounds such as BTZ may yield further insight. ###### Acknowledgements. We would like to thank Shira Chapman and Ignacio Reyes for being involved in the initial part of this collaboration and Bartek Czech for useful discussions. The Gravity, Quantum Fields and Information (GQFI) group at AEI is supported by the Alexander von Humboldt Foundation and the Federal Ministry for Education and Research through the Sofja Kovalevskaja Award. AR is supported by the Stichting Nederlandse Wetenschappelijk Onderzoek Instituten (NWO-I). JdB is supported by the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013), ERC Grant agreement ADG 834878. The work of MF was supported by the Polish National Science Centre (NCN) grant 2017/24/C/ST2/00469 until November 16th 2020, and through the grants SEV-2016-0597 and PGC2018-095976-B-C21 from MCIU/AEI/FEDER, UE since December 1st 2020. RC acknowledges support from the Netherlands Organisation for Scientific Research (NWO-I). SH holds a fellowship from the Ramon Areces Foundation (Spain). ## References * (1) S. Ryu and T. Takayanagi, Holographic derivation of entanglement entropy from AdS/CFT, Phys. Rev. Lett. 96 (2006) 181602, [hep-th/0603001]. * (2) J. M. Maldacena, The Large N limit of superconformal field theories and supergravity, Adv. Theor. Math. Phys. 2 (1998) 231-252, [hep-th/9711200]. * (3) L. Susskind, Entanglement is not enough, Fortsch. Phys. 64 (2016) 49–71, [arXiv:1411.0690]. * (4) R. Orus, A Practical Introduction to Tensor Networks: Matrix Product States and Projected Entangled Pair States, Annals Phys. 349 (2014) 117–158, [arXiv:1306.2164]. * (5) B. Swingle, Entanglement Renormalization and Holography, Phys. Rev. D 86 (2012) 065007, [arXiv:0905.1317]. * (6) B.
Swingle, Constructing holographic spacetimes using entanglement renormalization, arXiv:1209.3304. * (7) G. Vidal, Class of Quantum Many-Body States That Can Be Efficiently Simulated, Phys. Rev. Lett. 101 (2008) 110501, [quant-ph/0610099]. * (8) D. Stanford and L. Susskind, Complexity and Shock Wave Geometries, Phys. Rev. D 90 (2014), no. 12 126007, [arXiv:1406.2678]. * (9) A. R. Brown, D. A. Roberts, L. Susskind, B. Swingle, and Y. Zhao, Holographic Complexity Equals Bulk Action?, Phys. Rev. Lett. 116 (2016), no. 19 191301, [arXiv:1509.07876]. * (10) A. R. Brown, D. A. Roberts, L. Susskind, B. Swingle, and Y. Zhao, Complexity, action, and black holes, Phys. Rev. D 93 (2016), no. 8 086006, [arXiv:1512.04993]. * (11) J. Couch, W. Fischler, and P. H. Nguyen, Noether charge, black hole volume, and complexity, JHEP 03 (2017) 119, [arXiv:1610.02038]. * (12) J. Haegeman, T. J. Osborne, H. Verschelde, and F. Verstraete, Entanglement Renormalization for Quantum Fields in Real Space, Phys. Rev. Lett. 110 (2013), no. 10 100402, [arXiv:1102.5524]. * (13) R. Jefferson and R. C. Myers, Circuit complexity in quantum field theory, JHEP 10 (2017) 107, [arXiv:1707.08570]. * (14) S. Chapman, M. P. Heller, H. Marrochio, and F. Pastawski, Toward a Definition of Complexity for Quantum Field Theory States, Phys. Rev. Lett. 120 (2018), no. 12 121602, [arXiv:1707.08582]. * (15) M. A. Nielsen, M. R. Dowling, M. Gu, and A. C. Doherty, Quantum computation as geometry, Science 311 (2006), no. 5764 1133–1135, [quant-ph/0603161]. * (16) L. Susskind and E. Witten, The Holographic bound in anti-de Sitter space, hep-th/9805114. * (17) L. McGough, M. Mezei, and H. Verlinde, Moving the CFT into the bulk with $T\overline{T}$, JHEP 04 (2018) 010, [arXiv:1611.03470]. * (18) G. Jafari, A. Naseh, and H. Zolfi, Path Integral Optimization for $T\bar{T}$ Deformation, Phys. Rev. D 101 (2020), no. 2 026007, [arXiv:1909.02357]. * (19) H. 
Geng, $T\bar{T}$ Deformation and the Complexity=Volume Conjecture, Fortsch. Phys. 68 (2020), no. 7 2000036, [arXiv:1910.08082]. * (20) B. Chen, L. Chen, and C.-Y. Zhang, Surface/state correspondence and $T\overline{T}$ deformation, Phys. Rev. D 101 (2020), no. 10 106011, [arXiv:1907.12110]. * (21) S. Chakraborty, G. Katoch, and S. R. Roy, Holographic Complexity of LST and Single Trace $T\bar{T}$, arXiv:2012.11644. * (22) P. Caputa, N. Kundu, M. Miyaji, T. Takayanagi, and K. Watanabe, Anti-de Sitter Space from Optimization of Path Integrals in Conformal Field Theories, Phys. Rev. Lett. 119 (2017), no. 7 071602, [arXiv:1703.00456]. * (23) P. Caputa, N. Kundu, M. Miyaji, T. Takayanagi, and K. Watanabe, Liouville Action as Path-Integral Complexity: From Continuous Tensor Networks to AdS/CFT, JHEP 11 (2017) 097, [arXiv:1706.07056]. * (24) B. Czech, Einstein Equations from Varying Complexity, Phys. Rev. Lett. 120 (2018), no. 3 031601, [arXiv:1706.00965]. * (25) T. Takayanagi, Holographic Spacetimes as Quantum Circuits of Path-Integrations, JHEP 12 (2018) 048, [arXiv:1808.09072]. * (26) H. A. Camargo, M. P. Heller, R. Jefferson, and J. Knaute, Path integral optimization as circuit complexity, Phys. Rev. Lett. 123 (2019), no. 1 011601, [arXiv:1904.02713]. * (27) A. M. Polyakov, Quantum Geometry of Bosonic Strings, Phys. Lett. B 103 (1981) 207–210. * (28) G. Evenbly and G. Vidal, Tensor network renormalization yields the multiscale entanglement renormalization ansatz, Physical Review Letters 115 (Nov, 2015) [arXiv:1502.05385]. * (29) A. Milsted and G. Vidal, Tensor networks as path integral geometry, arXiv:1807.02501. * (30) A. Milsted and G. Vidal, Geometric interpretation of the multi-scale entanglement renormalization ansatz, arXiv:1812.00529. * (31) J. Kruthoff and O. Parrikar, On the flow of states under $T\overline{T}$, arXiv:2006.03054. * (32) B. Czech, L. Lamprou, S. McCandlish, and J. 
Sully, Tensor Networks from Kinematic Space, JHEP 07 (2016) 100, [arXiv:1512.01548]. * (33) C. Beny, Causal structure of the entanglement renormalization ansatz, New J. Phys. 15 (2013) 023020, [arXiv:1110.4872]. * (34) B. Czech, L. Lamprou, S. McCandlish, and J. Sully, Integral Geometry and Holography, JHEP 10 (2015) 175, [arXiv:1505.05515]. * (35) B. Czech, L. Lamprou, S. McCandlish, B. Mosk, and J. Sully, A Stereoscopic Look into the Bulk, JHEP 07 (2016) 129, [arXiv:1604.03110]. * (36) J. de Boer, F. M. Haehl, M. P. Heller, and R. C. Myers, Entanglement, holography and causal diamonds, JHEP 08 (2016) 162, [arXiv:1606.03307]. * (37) N. Bao, G. Penington, J. Sorce, and A. C. Wall, Beyond Toy Models: Distilling Tensor Networks in Full AdS/CFT, JHEP 11 (2019) 069, [arXiv:1812.01171]. * (38) N. Bao, G. Penington, J. Sorce, and A. C. Wall, Holographic Tensor Networks in Full AdS/CFT, arXiv:1902.10157. * (39) J. Hartle and S. Hawking, Wave Function of the Universe, Adv. Ser. Astrophys. Cosmol. 3 (1987) 174–189. * (40) M. Nozaki, S. Ryu, and T. Takayanagi, Holographic Geometry of Entanglement Renormalization in Quantum Field Theories, JHEP 10 (2012) 193, [arXiv:1208.3469]. * (41) M. Miyaji, T. Numasawa, N. Shiba, T. Takayanagi, and K. Watanabe, Continuous Multiscale Entanglement Renormalization Ansatz as Holographic Surface-State Correspondence, Phys. Rev. Lett. 115 (2015), no. 17 171602, [arXiv:1506.01353]. * (42) A. Belin, A. Lewkowycz, and G. Sárosi, Complexity and the bulk volume, a new York time story, JHEP 03 (2019) 044, [arXiv:1811.03097]. * (43) A. Belin, A. Lewkowycz, and G. Sarosi, Gravitational path integral from the $T^{2}$ deformation, JHEP 09 (2020) 156, [arXiv:2006.01835]. * (44) J. Boruch, P. Caputa, and T. Takayanagi, Path-Integral Optimization from Hartle-Hawking Wave Function, arXiv:2011.08188. * (45) P. Caputa, J. Kruthoff, and O. Parrikar, Building Tensor Networks for Holographic States, arXiv:2012.05247. * (46) F. Smirnov and A. 
Zamolodchikov, On space of integrable quantum field theories, Nucl. Phys. B 915 (2017) 363–383, [arXiv:1608.05499]. * (47) A. Cavaglià, S. Negro, I. M. Szécsényi, and R. Tateo, $T\bar{T}$-deformed 2D Quantum Field Theories, JHEP 10 (2016) 112, [arXiv:1608.05534]. * (48) G. Hayward, Gravitational action for space-times with nonsmooth boundaries, Phys. Rev. D 47 (1993) 3275–3280. * (49) D. Brill and G. Hayward, Is the gravitational action additive?, Phys. Rev. D 50 (1994) 4914–4919, [gr-qc/9403018]. * (50) J. Hartle and R. Sorkin, Boundary Terms in the Action for the Regge Calculus, Gen. Rel. Grav. 13 (1981) 541–549. * (51) L. Lehner, R. C. Myers, E. Poisson, and R. D. Sorkin, Gravitational action with null boundaries, Phys. Rev. D 94 (2016), no. 8 084046, [arXiv:1609.00207]. * (52) J. Brown and M. Henneaux, Central Charges in the Canonical Realization of Asymptotic Symmetries: An Example from Three-Dimensional Gravity, Commun. Math. Phys. 104 (1986) 207–226. * (53) T. Takayanagi, Holographic Dual of BCFT, Phys. Rev. Lett. 107 (2011) 101602, [arXiv:1105.5165]. * (54) M. Fujita, T. Takayanagi, and E. Tonni, Aspects of AdS/BCFT, JHEP 11 (2011) 043, [arXiv:1108.5152]. * (55) J. Erdmenger, M. Flory, and M.-N. Newrzella, Bending branes for DCFT in two dimensions, JHEP 01 (2015) 058, [arXiv:1410.7811]. * (56) M. Taylor, TT deformations in general dimensions, arXiv:1805.10287. * (57) T. Hartman, J. Kruthoff, E. Shaghoulian, and A. Tajdini, Holography at finite cutoff with a $T^{2}$ deformation, JHEP 03 (2019) 004, [arXiv:1807.11401]. * (58) R. Arnowitt, S. Deser, and W. Misner, The dynamics of General Relativity, in Gravitation: an introduction to current research (L. Witten, ed.), ch. 7, pp. 227–265. Wiley, New York, 1962. * (59) J. D. Brown and J. W. York, Quasilocal energy and conserved charges derived from the gravitational action, Phys. Rev. D 47 (1993) 1407–1419. * (60) M. Guica and R. Monten, $T\bar{T}$ and the mirage of a bulk cutoff, arXiv:1906.11251. * (61) V. 
Balasubramanian, B. D. Chowdhury, B. Czech, and J. de Boer, Entwinement and the emergence of spacetime, JHEP 01 (2015) 048, [arXiv:1406.5859]. * (62) R. Abt, J. Erdmenger, H. Hinrichsen, C. M. Melby-Thompson, R. Meyer, C. Northe, and I. A. Reyes, Topological Complexity in AdS3/CFT2, Fortsch. Phys. 66 (2018), no. 6 1800034, [arXiv:1710.01327]. * (63) B. Chen, B. Czech, and Z.-z. Wang, Cutoff Dependence and Complexity of the CFT2 Ground State, arXiv:2004.11377. * (64) C. Teitelboim, Gravitation and Hamiltonian Structure in Two Space-Time Dimensions, Phys. Lett. B 126 (1983) 41–45. * (65) R. Jackiw, Lower Dimensional Gravity, Nucl. Phys. B 252 (1985) 343–356. * (66) D. J. Gross, J. Kruthoff, A. Rolph, and E. Shaghoulian, $T\overline{T}$ in AdS2 and Quantum Mechanics, Phys. Rev. D 101 (2020), no. 2 026011, [arXiv:1907.04873]. * (67) D. J. Gross, J. Kruthoff, A. Rolph, and E. Shaghoulian, Hamiltonian deformations in quantum mechanics, $T\bar{T}$, and the SYK model, Phys. Rev. D 102 (2020), no. 4 046019, [arXiv:1912.06132]. * (68) L. Susskind and Y. Zhao, Switchbacks and the Bridge to Nowhere, arXiv:1408.2823. * (69) D. Carmi, S. Chapman, H. Marrochio, R. C. Myers, and S. Sugishita, On the Time Dependence of Holographic Complexity, JHEP 11 (2017) 188, [arXiv:1709.10184]. * (70) J. Hernandez, R. C. Myers, and S.-M. Ruan, Quantum Extremal Islands Made Easy, PartIII: Complexity on the Brane, arXiv:2010.16398. * (71) S. C. Davis, Generalized Israel junction conditions for a Gauss-Bonnet brane world, Phys. Rev. D 67 (2003) 024030, [hep-th/0208205]. * (72) A. C. Wall, Maximin Surfaces, and the Strong Subadditivity of the Covariant Holographic Entanglement Entropy, Class. Quant. Grav. 31 (2014), no. 22 225007, [arXiv:1211.3494]. * (73) B. Czech, J. L. Karczmarek, F. Nogueira, and M. Van Raamsdonk, The Gravity Dual of a Density Matrix, Class. Quant. Grav. 29 (2012) 155009, [arXiv:1204.1330]. * (74) C. Akers, J. Koeller, S. Leichenauer, and A. 
Levine, Geometric Constraints from Subregion Duality Beyond the Classical Regime, arXiv:1610.08968. * (75) J. Camps, The Parts of the Gravitational Field, arXiv:1905.10121. * (76) N. Engelhardt and A. C. Wall, Extremal Surface Barriers, JHEP 03 (2014) 068, [arXiv:1312.3699].
# Gravitational tuning forks and hierarchical triple systems Vitor Cardoso CENTRA, Departamento de Física, Instituto Superior Técnico – IST, Universidade de Lisboa – UL, Avenida Rovisco Pais 1, 1049 Lisboa, Portugal Francisco Duque CENTRA, Departamento de Física, Instituto Superior Técnico – IST, Universidade de Lisboa – UL, Avenida Rovisco Pais 1, 1049 Lisboa, Portugal Gaurav Khanna Department of Physics and Center for Scientific Computing and Visualization Research, University of Massachusetts, Dartmouth, MA 02747 Department of Physics, The University of Rhode Island, Kingston, RI 02881 ###### Abstract We study gravitational-wave (GW) emission in the strong-field regime by a hierarchical triple system composed of a binary system placed in the vicinity of a supermassive black hole (SMBH). The LIGO-Virgo collaboration recently reported evidence for coalescences with this dynamical origin. These systems are common in galactic centers and are thus a target for the space-based LISA mission as well as other advanced detectors. Doppler shifts, aberration, lensing and strong amplitude modulations are features present in the GW signal from these systems, built into our framework with no need for phenomenological patches. We find that the binary can resonantly excite the quasinormal modes of the SMBH, as in the resonant excitation of two tuning forks with matching frequencies. The flux of energy crossing the SMBH horizon can be significant when compared with that from standard extreme-mass-ratio inspirals. Therefore, these triple systems are excellent probes of strong-field physics and of the BH nature of compact objects. Introduction. Since the birth of the gravitational-wave (GW) era in 2015 PhysRevLett.116.061102 , dozens of GW events have been detected Abbott:2020niy . Other detectors will soon join the ground-based network and further improve our ability to measure GWs in the $1-10^{3}$ Hz frequency range Akutsu:2018axf ; ET .
The space-based LISA mission will extend detection to the $\sim 10^{-5}-10^{-1}$ Hz window. GWs with these frequencies are emitted in galactic centers by supermassive black holes (SMBHs) and extreme-mass-ratio inspirals (EMRIs), but also by cosmological sources Barack:2018yly ; Barausse:2020rsu . Coverage of such a broad spectrum will allow us to test General Relativity with unprecedented precision over a wide range of scales, and to answer questions regarding the nature of compact objects, of dark matter and of dark energy Barack:2018yly ; Barausse:2020rsu . However, recent results question the validity of the “standard” binary scenario. During its third observing run, the LIGO-Virgo collaboration detected three BH binary coalescences LIGOScientific:2020stg ; Abbott:2020khf ; Abbott:2020tfl ; Abbott:2020uma unlikely to be composed of two first-generation BHs Liu:2020gif ; Fragione:2020han . Instead, their components are thought to be remnants of previous coalescences, forming what is called a “hierarchical merger” Liu:2020gif ; Abbott:2020tfl ; Fragione:2020han ; Martinez:2020lzt ; Lu:2020gfh . Generally, these require the presence of a third body to induce coalescence. The Zwicky Transient Facility Graham:2019qsw ; 2019PASP..131a8002B reported an electromagnetic counterpart to one of these events, GW190521 Graham:2020gwr , consistent with the presence of the BH binary in an active galactic nucleus (AGN) Bartos:2016dgn ; Stone:2016wzz ; 2019ApJ...884L..50M ; RevModPhys.82.3121 ; Ghez_2008 , reinforcing the claim that its components were part of a hierarchical triple system. “Hierarchical” here refers to the distinct length scales between the orbit of the BH binary and that of its center-of-mass (CM) around the third body.
Hierarchical triple systems are common in a variety of astrophysical scenarios, such as globular clusters Zevin:2018kzq ; Martinez:2020lzt , AGNs Bartos:2016dgn ; 10.1093/mnras/stw2260 ; Chen:2018axp ; Toubiana:2020drf , and other dense stellar environments OLeary:2016ayz ; 2016MNRAS.463.2109R ; Portegies_Zwart_2000 . Around 90$\%$ of low-mass binaries with periods shorter than 3 days are expected to belong to some hierarchical structure 2006AA...450..681T ; Pribulla:2006gk ; Robson:2018svj . The above motivated recent studies on the dynamics and GW emission of hierarchical triple systems. Kozai-Lidov resonances, in particular, have attracted some attention 1962AJ.....67..591K ; doi:10.1146/annurev-astro-081915-023315 ; poisson_will_2014 . These describe secular changes in the binary eccentricity and inclination with respect to the orbit described by its CM around the third object. This mechanism triggers periods of high eccentricity ($e\sim 1$) where GW emission increases significantly, potentially inducing coalescence in eccentric orbits detectable by LISA Hoang:2019kye ; Randall:2019sab ; Randall:2019znp ; Deme:2020ewx , which may enter the LIGO-Virgo band still at high eccentricities Antonini:2012ad ; Antonini_2016 ; Hoang_2018 ; Zevin:2018kzq . Moreover, it can lead to GW bursts at periapsis PhysRevD.85.123005 ; Gupta:2019unn . A direct integration of the equations of motion confirms that GWs from these systems have unique features Gupta:2019unn , which may be detected indirectly via radio observations of binary pulsars Suzuki:2020zbg . There are also attempts at modeling the effects of a third body directly in the waveform.
These include Doppler shifts 10.1093/mnras/stv172 ; Meiron:2016ipr ; Randall:2018lnh ; Wong:2019hsq ; Han:2018hby , relativistic beaming effects Torres-Orjuela:2018ejx ; Torres-Orjuela:2020cly , gravitational lensing Ezquiaga:2020dao ; Ezquiaga:2020gdt and other dynamical effects in triple systems caused by the third body Yu:2020dlm ; Bonga:2019ycj ; Yang:2019iqa . Studies so far are restricted to the (post-)Newtonian regime and cannot capture strong-field effects. Here, we take a first step in this direction and investigate GWs from binaries around SMBHs. Our methods can probe resonant excitation of quasinormal modes (QNMs) in triple systems, and capture for free all of the relativistic effects which have so far been included only at a phenomenological level. We adopt units where $c=G=1$. Setup: Hierarchical triple systems. We are interested in a setup where a small binary (SB) of compact objects is in the vicinity of a “large” BH (larger than all the lengthscales of the SB), as illustrated in Fig. 1. The SB is taken to be a small perturbation in a background described by the geometry of the massive BH, which in vacuum must belong to the Kerr family. We use Boyer-Lindquist coordinates $\\{t,r,\theta,\varphi\\}$ PhysRevLett.11.237 in our study and define $\Sigma\coloneqq r^{2}+a^{2}\cos^{2}\theta$ and $\Delta\coloneqq r^{2}-2Mr+a^{2}$. There is an event horizon at $r_{+}=M+\sqrt{M^{2}-a^{2}}$. The SB is modeled as composed of two point particles $\pm$. The SB components each carry a scalar charge $\alpha$ in our setup, which allows us to study the scalar radiation problem and compare to the more complex gravitational setup. Results for energy fluxes or scalar amplitudes scale in a trivial way with $\alpha$. Since we will only discuss normalized quantities, the actual value of the scalar charge $\alpha$ is not relevant.
If $\tau$ denotes the proper time of each point particle along the world line $z^{\mu}(\tau)=(t_{0}(\tau),r_{0}(\tau),\theta_{0}(\tau),\varphi_{0}(\tau))$, the corresponding stress-energy tensor is $\displaystyle T^{\mu\nu}(x)^{\pm}$ $\displaystyle=$ $\displaystyle m_{0}^{\pm}\int_{-\infty}^{+\infty}\delta^{(4)}(x-z(\tau))\frac{dz^{\mu}}{d\tau}\frac{dz^{\nu}}{d\tau}d\tau\,,$ (1) with $\int\int\int\int\delta^{(4)}(x)\sqrt{-g}d^{4}x\equiv 1$, and $m_{0}^{\pm}$ is the rest mass of each component of the compact binary. First-order perturbations of the Kerr spacetime are described by Teukolsky’s master equation Teukolsky:1973ha ${\cal L}_{s}\Psi=\Sigma\,\mathcal{T}$, where ${\cal L}$ is a second-order differential operator, $s$ refers to the “spin weight” of the perturbation field (e.g., $s=0,\pm 2$ for scalars and tensors, respectively), and $\mathcal{T}$ is a spin-dependent source term Teukolsky:1973ha . To compute the source $\mathcal{T}$, we need to prescribe the motion of the SB. We take the CM at $r=R(\tau)$ either to be static at some fixed radius, to describe a timelike equatorial circular orbit around a Kerr BH, or to undergo a simple plunge. For the SB inner motion, we take elliptic orbits around the CM, such that $\varphi^{\pm}=\Omega_{\rm CM}t\pm\epsilon_{\varphi}\sin{\omega_{0}t}\,,\quad\theta^{\pm}=\pi/2\pm\epsilon_{\theta}\cos{\omega_{0}t}\,,$ (2) where $\epsilon_{\theta},\epsilon_{\varphi}\ll 1$ parametrize the two axes of the ellipse, $\delta R_{\theta}\equiv\epsilon_{\theta}R$ and $\delta R_{\varphi}\equiv\epsilon_{\varphi}R$, of the SB, and $\Omega_{\text{CM}}$ is the angular velocity of the CM. Note that $\Omega_{\rm CM}$ and $\omega_{0}$ are coordinate frequencies, while the proper oscillation frequency of the SB, $\omega_{0}^{\prime}$, is obtained by a rescaling with the time component of the 4-velocity of the CM, i.e. $\omega_{0}^{\prime}=U_{\rm CM}^{t}\omega_{0}$.
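The inner motion (2) is simple enough to code up directly; the parameter values in the sketch below are illustrative only, not taken from the simulations:

```python
import math

# Small-binary angular coordinates from Eq. (2): the CM advances at Omega_CM
# while the two components oscillate in antiphase with coordinate frequency omega_0.

def sb_angles(t, omega_cm, omega0, eps_phi, eps_theta):
    """Return ((phi+, theta+), (phi-, theta-)) for the two SB components."""
    osc_phi = eps_phi * math.sin(omega0 * t)
    osc_th = eps_theta * math.cos(omega0 * t)
    return ((omega_cm * t + osc_phi, math.pi / 2 + osc_th),
            (omega_cm * t - osc_phi, math.pi / 2 - osc_th))

# The components oscillate symmetrically about the CM: phi+ + phi- = 2*Omega_CM*t,
# and eps_theta = 0 keeps both components on the equator.
(pp, tp), (pm, tm) = sb_angles(0.3, 0.05, 1.2, 0.01, 0.0)
assert abs((pp + pm) - 2 * 0.05 * 0.3) < 1e-12
assert tp == math.pi / 2 and tm == math.pi / 2
```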
For concreteness, we focus exclusively on equal-mass binaries, $m_{0}^{\pm}=m_{0}$ and a highly eccentric orbit with $\epsilon_{\theta}=0$ (we do not see any qualitatively new phenomena in the general case; this particular choice could mimic high-eccentricity binaries driven by Kozai-Lidov resonances). A physical relation between $\epsilon_{\varphi}$ and $\omega_{0}$ must be imposed. In the SB’s rest frame, $\delta R^{\prime}_{\varphi}\propto 1/(\omega^{\prime}_{0})^{2/3}$, where the prime refers to proper quantities. For SBs on circular geodesics, for example, doing the appropriate rescaling $\omega_{0}^{\prime}=U_{\rm CM}^{t}\omega_{0}$ and $\delta R_{\varphi}=\Delta/\Sigma\,\cdot\delta R^{\prime}_{\varphi}$, we find $\displaystyle\epsilon_{\varphi}\propto\frac{\Delta}{\Sigma}\frac{1}{R(U_{\rm CM}^{t}\omega_{0})^{2/3}}\,.$ (3) This relation assumes that the scalar charge $\alpha$ is much smaller than unity and does not affect the motion of the SB in any meaningful way. Figure 1: Equatorial slice of a spacetime with a hierarchical triple system, where one component is a central SMBH. We place a small binary (SB) of frequency $\omega_{0}$ orbiting the SMBH. At the innermost stable circular orbit (ISCO), timelike circular motion is marginally stable. High-frequency GWs are (semi-) trapped at the light ring (LR). Such motion is unstable, and can be associated with the “ringdown” excited during mergers. Among other effects, here we show that the LR can be excited by tuning $\omega_{0}$. We are looking for possible resonances in this triple system, which may happen when the forcing frequency equals natural frequencies of the system. There are three important frequencies in the problem: that of the CM, that of null geodesics on the light ring (LR), and the angular velocity of the BH horizon $\Omega_{H}=a/(2Mr_{+})$ Bardeen:1972fi . 
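The horizon quantities entering this discussion, $r_{+}=M+\sqrt{M^{2}-a^{2}}$ and $\Omega_{H}=a/(2Mr_{+})$, can be packaged in a few lines (the spin value is illustrative):

```python
import math

# Kerr horizon radius and angular velocity used in the text:
# r_+ = M + sqrt(M^2 - a^2), Omega_H = a / (2 M r_+).

def kerr_horizon(M, a):
    rp = M + math.sqrt(M * M - a * a)
    return rp, a / (2.0 * M * rp)

M = 1.0
rp, OmH = kerr_horizon(M, 0.9 * M)
assert abs(rp - (1.0 + math.sqrt(1.0 - 0.81))) < 1e-12
# Schwarzschild limit: r_+ = 2M and a non-rotating horizon
rp0, OmH0 = kerr_horizon(M, 0.0)
assert abs(rp0 - 2.0) < 1e-12 and OmH0 == 0.0
# M * Omega_H is O(1), i.e. Omega_H is O(1/M), the scale of the QNM frequencies
assert 0.1 < M * OmH < 1.0
```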
Close to the BH all are of order $\mathcal{O}(1/M)$, which is also the order of the QNM frequencies of the central BH Berti:2009kk . To have $M\omega_{0}\sim 1$, we need to ensure $\delta R_{\varphi}/m_{0}\sim(M/m_{0})^{2/3}$. For a SMBH with $M\sim 10^{4}-10^{6}M_{\odot}$, like Sagittarius A*, and a SB composed of stellar-mass BHs with $m_{0}\sim 1-100M_{\odot}$, this would correspond to $\delta R/m_{0}\sim 10^{2}-10^{4}$. Therefore, the SB can probe the central BH while still well within the inspiral phase of its evolution. Note also that even though Teukolsky’s equation assumes very large mass ratios, results in the literature have shown that it is able to reproduce Numerical Relativity for mass ratios of order 10 Sperhake:2011ik ; Rifat:2019ltp . Hence, our results might extend to the case of intermediate-mass black holes orbiting SMBHs. Numerical implementation. We used two different numerical schemes to solve Teukolsky’s equation. One works in the time domain, and it smooths the pointlike character of the SB constituents Krivan_1997 ; LopezAleman:2003ik ; Pazos_valos_2005 ; Sundararajan:2007jg . The other technique is based on separation of angular variables using spheroidal harmonics Berti:2005gp in the frequency domain, where one can apply standard Green function techniques Davis:1971gg ; Mino:1997bx ; Cardoso:2002ay ; Berti:2010ce ; Cardoso:2019nis . Both approaches are well documented and have been widely tested in the past. Both codes were compared with analytical estimates in the low-frequency regime, obtained using matched asymptotic techniques Starobinsky:1973aij ; Poisson:1993vp ; Cardoso:2019nis . Results from these independent codes are consistent with each other and with the analytical estimates. Resonant excitation of QNMs. Figure 2: Energy output when a SB stands at the ISCO of a SMBH of spin $a=0.9M$, as a function of the orbital frequency of the SB components, $\omega_{0}$.
The modal energy output, as measured by ${}_{-2}\mathcal{R}$, peaks at a finite $\omega_{0}$ extremely well described by the lowest QNM (cf. Table 1). Also shown is the flux integrated over all modes: it has a substantial component going down the SMBH horizon, and the total flux at infinity is modulated by QNM contributions. Here, $\hat{\omega}_{\ell m}\equiv M\omega_{\rm QNM}/2$.

$\ell$ | $s$ | $a/M$ | $M\omega_{\text{QNM}}/2$ | $M\omega_{0_{\text{LR}}}$ | $M\omega_{0_{\text{ISCO}}}$ | ${}_{s}\mathcal{R}_{\text{LR}}$ | ${}_{s}\mathcal{R}_{\text{ISCO}}$
---|---|---|---|---|---|---|---
2 | 0 | 0 | 0.242 | 0.242 | 0.189 | 4.5 | 2.0
2 | -2 | 0 | 0.186 | 0.175 | 0.156 | 0.6 | 1.5
2 | -2 | 0.9 | 0.335 | 0.332 | 0.319 | 88.0 | 0.8
3 | 0 | 0 | 0.338 | 0.337 | 0.255 | 10.0 | 2.5
3 | -2 | 0 | 0.300 | 0.289 | 0.250 | 2.0 | 2.3
3 | -2 | 0.9 | 0.522 | 0.520 | 0.500 | 515.8 | 2.7
4 | 0 | 0 | 0.434 | 0.433 | 0.317 | 21.6 | 3.0
4 | -2 | 0 | 0.405 | 0.395 | 0.326 | 5.6 | 3.0
4 | -2 | 0.9 | 0.705 | 0.704 | 0.675 | 1896.4 | 5.4

Table 1: Frequency $M\omega_{0\,X}$ which maximizes the energy output of a SB standing at location $X$ close to a SMBH, in a given $(\ell,\ell)$ mode, as measured by the ratio ${}_{s}\mathcal{R}$ ($s=0,-2$ for scalar or gravitational perturbations, respectively). The SB CM is static, and sitting at the LR or at the ISCO. Notice the excellent agreement with the lowest QNM frequency. The results for orbiting SBs are similar.

We now use the SB as a tuning fork, placing it at some fixed radius, with its CM fixed with respect to distant observers, and letting its frequency $\omega_{0}$ vary. In flat space, this system radiates a (time-averaged) scalar flux in the $\ell\,,m$ mode ($J_{\nu}(z)$ is a Bessel function of the first kind NIST:DLMF ) ${}_{0}\dot{E}_{N\,\ell\,m}=m_{0}^{2}\alpha^{2}\epsilon_{\varphi}^{4}\frac{\Gamma\left(\ell+3/2\right)}{64\sqrt{\pi}\,\ell!\,R}\,m^{4}\,\omega_{0}\,J_{\ell+1/2}^{2}(R\,\omega_{0})\,,$ (4) and a similar but more cumbersome expression for the Newtonian gravitational-wave flux ${}_{-2}\dot{E}_{N\,\ell\,m}$.
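The flat-space flux (4) can be evaluated numerically using the identity $J_{\ell+1/2}(x)=\sqrt{2x/\pi}\,j_{\ell}(x)$ relating half-integer-order Bessel functions to spherical Bessel functions. A minimal Python sketch follows (units $G=c=1$; the parameter values are illustrative and not taken from the text):

```python
import math

def sph_jn(l, x):
    """Spherical Bessel function j_l(x), by upward recurrence (accurate for x not much below l)."""
    j_prev = math.sin(x) / x                       # j_0
    if l == 0:
        return j_prev
    j_curr = math.sin(x) / x**2 - math.cos(x) / x  # j_1
    for n in range(1, l):
        j_prev, j_curr = j_curr, (2 * n + 1) / x * j_curr - j_prev
    return j_curr

def scalar_flux_flat(l, m, omega0, R, m0=1.0, alpha=1.0, eps=1.0):
    """Time-averaged flat-space scalar flux in the (l, m) mode, Eq. (4),
    with J_{l+1/2}(x)^2 = (2x/pi) j_l(x)^2."""
    x = R * omega0
    bessel_sq = (2.0 * x / math.pi) * sph_jn(l, x) ** 2
    prefactor = (m0**2 * alpha**2 * eps**4 * math.gamma(l + 1.5)
                 / (64.0 * math.sqrt(math.pi) * math.factorial(l) * R))
    return prefactor * m**4 * omega0 * bessel_sq

# Illustrative numbers (not from the paper): l = m = 2 mode with R*omega0 = 1.2
flux = scalar_flux_flat(2, 2, omega0=0.12, R=10.0)
```

The upward recurrence is adequate here because the flux is needed near $R\,\omega_{0}\sim\mathcal{O}(1)$ for low $\ell$; a library routine would be preferable at large $\ell$ and small argument.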
Define an estimate of the SMBH impact through the ratio ${}_{s}\mathcal{R}_{\ell\,m}=\,_{s}\dot{E}_{\ell\,m}/_{s}\dot{E}_{N\,\ell\,m}\,.$ (5) Our results indicate that at large distances $R$ this ratio tends to unity, as it should on physical grounds. Figure 2 shows the behavior of ${}_{-2}\mathcal{R}_{33}$ as the SB frequency $\omega_{0}$ changes, for a SB sitting at the ISCO of a SMBH. The behavior is similar for other modes and fields. We observe a peak which we identify as a resonant excitation of the $\ell=m=3$ QNM. As shown in Table 1, the location of the peak is well described by the lowest QNM frequency Berti:2009kk , for general binary locations. When the SB is placed at the LR, the agreement is excellent (better than 1% for scalars, and 4% for GWs, for the lowest $(\ell,m)$ modes). Recall that QNMs can be interpreted as waves marginally trapped in unstable orbits on the photon-sphere Cardoso:2008bp . We therefore arrive at the first result of this paper: a hierarchical triple system behaves as a driven harmonic oscillator georgi1993physics , where the SB is the external harmonic force and the central BH the (damped) oscillator. This behavior is analogous to the Purcell effect in quantum electrodynamics PhysRev.69.37 , describing the enhancement in the spontaneous decay of a quantum emitter inside a cavity, when its frequency matches those of the modes of the field inside the cavity. Our results are consistent with recent findings PhysRevLett.110.237401 , namely that the spatially independent (i.e. independent of $R$) contribution to the power spectrum in Fig. 2 is described by a Lorentzian curve $\mathcal{R}\propto\omega_{\text{QNM}}^{2}/(\omega_{\text{QNM}}^{2}+4Q^{2}(\omega_{0}-\omega_{\text{QNM}})^{2})$, where $Q$ is the quality factor of the central BH. Our results are consistent with and extend those of Ref. Thornburg:2019ukt , where resonant excitation of QNMs was observed for EMRIs in eccentric orbits, during passage through periapsis.
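The Lorentzian description of the resonance quoted above is easy to check numerically: it peaks at $\omega_{0}=\omega_{\rm QNM}$ with full width at half maximum $\omega_{\rm QNM}/Q$. In this sketch the QNM frequency is read from Table 1, but the value of the quality factor $Q$ is hypothetical, chosen only for illustration:

```python
def lorentzian_response(omega0, omega_qnm, Q):
    """Lorentzian model R ~ w^2 / (w^2 + 4 Q^2 (omega0 - w)^2) with w = omega_qnm:
    unit peak at omega0 = omega_qnm, full width at half maximum w / Q."""
    return omega_qnm**2 / (omega_qnm**2 + 4.0 * Q**2 * (omega0 - omega_qnm)**2)

# omega_qnm from Table 1 (l = 3, a = 0.9M); Q = 5 is a hypothetical quality factor
w, Q = 0.522, 5.0
peak = lorentzian_response(w, w, Q)                  # 1 at resonance
half = lorentzian_response(w + w / (2.0 * Q), w, Q)  # 0.5 at the half-width point
```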
The effect is stronger the closer the particle can get to the LR, as also concluded in Ref. Price:2015gia . As a rule of thumb, the flux peaks at lower frequencies the further the SB is placed from the BH, in agreement with blueshift/redshift corrections. Note that $\mathcal{R}$ smaller than unity does not imply that the system is emitting less energy than expected, since a portion of the radiation falls into the BH. Also, a possible CM orbital motion contributes a shift in the resonant frequencies of $\pm m\Omega_{\text{CM}}$, fully consistent with our results. The maximum value of $\mathcal{R}$ in the entire $(R,\omega_{0})$ parameter space does not occur precisely at the LR, but close to it. The maximum is attained at locations $R$ closer to the horizon for large $\ell$. Finally, the magnitude of the resonance grows with $\ell$. For a fixed CM location $R$ and multipole $\ell$ we searched for the $\omega_{0}$ at which ${}_{s}\mathcal{R}$ attains its maximum ${}_{s}\mathcal{R}_{\rm peak}$. We find an exponential dependence on $\ell$, ${}_{s}\mathcal{R}_{\rm peak}\sim a+b\exp(c\cdot\ell)$ at large $\ell$, with $a,\,b,\,c$ constants.

Total integrated flux. Ours is a mode decomposition in terms of harmonics of the central BH, thus radiation has support in higher modes as the binary is placed further away from it Berti:2005gp ; Gualtieri:2008ux . In general, therefore, the lowest modes will not be dominant and one needs to sum a sufficient number of modes to understand total fluxes. Already for a SB at the ISCO of a non-rotating BH we find that the GW flux at infinity is comparable to that at the horizon of the SMBH. As seen in Fig. 2, the effect is more dramatic when spin is included: the flux crossing the horizon can be orders of magnitude larger than that at infinity, even including superradiant modes Brito:2015oca . This peculiar aspect is due to the similar length scales of the central BH horizon and the radiation wavelength.
GWs are then efficiently absorbed by the BH, in clear contrast with the inspiral phase of an EMRI, where the radiation wavelength is much larger than the BH radius. This is our second result: hierarchical triple systems where the SMBH occupies a large fraction of the SB’s sky will naturally probe strong-field physics, since the fraction of radiation that falls into the SMBH is non-negligible. This will be essential for dynamical evolutions of these systems, particularly when accounting for radiation reaction effects. For a fixed radius $R$, the field has support on higher $\ell$ modes as the SB vibrates at higher frequencies $\omega_{0}$. If the SB is close enough to the BH, it can resonantly excite the QNMs, leading to characteristic peaks in the flux at infinity/horizon, as seen in Fig. 2. These structures correspond to the single multipolar excitations studied in the previous section.

Waveforms: Doppler, aberration & lensing.

---

Figure 3: Teukolsky function $\Psi$ measured by an (anti)aligned stationary observer at $r=75M$, for a SB with constant proper frequency $M\omega_{0}^{\prime}=1.0$ radially infalling from $r=30M$ with zero initial velocity. The dotted lines correspond to the CM contribution to the signal. The SB crosses $r=10M$ at $t\sim 245M$, the ISCO at $t\sim 263M$ and the LR at $t\sim 278M$.

As a by-product of our methods, we can calculate waveforms from SBs close to SMBHs, which feature interesting relativistic effects. Figure 3 shows the GW signal produced when a SB, of constant proper frequency $\omega_{0}^{\prime}$, falls radially from rest into a non-rotating SMBH. The signal is shown for observers sitting along the merger direction, podal and anti-podal. The observer aligned with the SB sees it moving away, and a GW signal that is progressively redshifted both kinematically and gravitationally (the shifts, barely visible to the naked eye, are present and agree with expectations). An anti-aligned observer sees a blueshifted signal.
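The combined kinematic and gravitational shift just described can be estimated with a simple Schwarzschild sketch, composing the static-frame gravitational redshift with a longitudinal Doppler factor. This is an approximation of our own, not the full Teukolsky computation behind Fig. 3:

```python
import math

def shift_factor(r_emit, r_obs, r_start, M=1.0, receding=True):
    """Observed/emitted frequency ratio for a source falling radially from rest
    at r_start in Schwarzschild, seen by a static observer at r_obs:
    gravitational redshift composed with a longitudinal Doppler factor."""
    E = math.sqrt(1.0 - 2.0 * M / r_start)   # conserved energy per unit mass
    g = 1.0 - 2.0 * M / r_emit
    v = math.sqrt(max(0.0, 1.0 - g / E**2))  # infall speed in the local static frame
    doppler = math.sqrt((1.0 - v) / (1.0 + v)) if receding else math.sqrt((1.0 + v) / (1.0 - v))
    return math.sqrt(g / (1.0 - 2.0 * M / r_obs)) * doppler

# Infall from rest at 30M, observed at 75M (the setup of Fig. 3):
aligned = shift_factor(10.0, 75.0, 30.0, receding=True)   # source receding: redshift, < 1
anti = shift_factor(10.0, 75.0, 30.0, receding=False)     # source approaching: blueshift, > 1
```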
As the SB crosses the LR, the radiation it emits is semi-trapped and the signal rings down: the high frequency of the signal is still dictated by the SB, but is now modulated by a low-frequency ($\sim 0.19/M$) decay ($\sim e^{-0.1t}$). The parameters of such decay and low-frequency modulation agree remarkably well with the frequency and damping time of null geodesics at the LR. Imprints of the binary nature of the SB are clearly left on the ringdown stage, which differs visibly from that generated by a point mass.

---

Figure 4: Teukolsky function $\Psi$ measured by a stationary observer at large distances (either edge- or face-on, $\theta=\pi/2,\,0$ respectively; the face-on signal is multiplied by 100), for a SB around the ISCO of a non-rotating BH (we removed the CM contribution, which just causes a low-frequency modulation). The orbital CM period is $T_{\text{CM}}\approx 93M$ and at $t=0$ the observer is aligned with the SB. The Doppler effect induces frequency shifts, while relativistic beaming and gravitational lensing modulate the amplitude. The maximum blueshift is well described by $\omega_{\text{max}}=\omega_{0}^{\prime}\,\Upsilon\left((\Upsilon+v_{\text{CM}})/(\Upsilon-v_{\text{CM}})\right)^{1/2}$, with $\Upsilon=\sqrt{1-2M/R}$, $M\omega_{0}^{\prime}=1$ the proper SB frequency and $v_{\text{CM}}$ the CM velocity 1972ApJ...173L.137C ; 10.1093/mnras/stv172 .

Finally, Fig. 4 shows the GW measured by stationary observers at large distances, for a SB on circular motion at the ISCO of a non-rotating BH. These are signals calculated from first principles. We removed the (linear) CM contribution, which only induces a low-frequency modulation. Observers on the equatorial plane see gravitational and Doppler-induced frequency shifts, consistent with analytical predictions 1972ApJ...173L.137C ; 10.1093/mnras/stv172 , when the CM is moving towards the observer.
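The maximum-blueshift formula above is straightforward to evaluate. In this sketch we take $v_{\rm CM}=0.5$, the circular-orbit speed measured by a local static observer at the Schwarzschild ISCO; this value is our assumption, not one quoted in the text:

```python
import math

def max_blueshift(omega0_prime, R, v_cm, M=1.0):
    """omega_max = omega0' * Y * ((Y + v_cm) / (Y - v_cm))**0.5, Y = sqrt(1 - 2M/R)."""
    Y = math.sqrt(1.0 - 2.0 * M / R)
    return omega0_prime * Y * math.sqrt((Y + v_cm) / (Y - v_cm))

# Schwarzschild ISCO (R = 6M) with M*omega0' = 1; v_cm = 0.5 is the local
# circular-orbit speed there (our assumption):
w_max = max_blueshift(1.0, 6.0, 0.5)   # ~1.67 in units of 1/M
```

For $v_{\rm CM}=0$ the formula reduces to the pure gravitational redshift factor $\Upsilon$, as it should.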
The amplitude of the wave can vary by orders of magnitude because of relativistic beaming Torres-Orjuela:2018ejx ; Torres-Orjuela:2020cly ; Gupta:2019unn and gravitational lensing Ezquiaga:2020spg ; Ezquiaga:2020gdt . The former focuses the radiation along the direction of motion, and is significant for fast CM motion. The maximum amplitude does not occur precisely when the SB is moving towards the observer ($t\sim 70M$ in Fig. 4) but slightly before, when the SB is still behind the BH with respect to the observer. This is due to lensing by the central BH, which distorts the path taken by GWs and concentrates radiation in certain directions, amplifying the signal Nambu:2015aea ; Nambu:2019sqn . This effect is more relevant for larger frequencies, when the radiation wavelength is much smaller than the BH radius. On the other hand, observers facing the plane of motion “face-on” ($\theta=0$) do not measure such modulations, since the motion of the CM is now transverse. The only feature is a modulation in amplitude coming from the CM motion (at second order), which has also been reported in post-Newtonian studies of triple systems Gupta:2019unn .

Discussion. We show that a stellar-mass binary system (or any other radiator) in the vicinity of a SMBH is an excellent probe of strong gravity. Under special circumstances, which require a fine tuning of the system, the binary can resonantly excite the modes of the SMBH, offering a unique opportunity to probe the Kerr geometry and the presence of horizons in the cosmos. Even if this fine tuning is not present, the comparable magnitudes of the SB’s radiation wavelength and the SMBH horizon radius lead to an enhancement of energy absorption by the SMBH at any frequency. Such classes of hierarchical triple systems are abundant in AGNs, and thus our results have implications for GW astronomy, in particular for LISA, which is specially designed to detect GWs originating in galactic centers Barausse:2020rsu .
While quantifying a detectability rate for the resonances we described goes beyond the scope of this work, we can estimate whether a SB can get close enough before being tidally disrupted due to the Hills mechanism 1988Natur.331..687H ; Addison:2015bpa ; Suzuki:2020vfw . This occurs if the tidal forces induced by the BH overcome the binary’s self-gravity, which happens at a radius $R_{t}\sim 2\delta R\left(M/2m_{0}\right)^{1/3}$. The SB frequency is related to its separation by Kepler’s law $\omega_{0}\sim\sqrt{2m_{0}/\delta R^{3}}$. We thus find $R_{t}\lesssim 2M/(M\omega_{0})^{2/3}$. Already for $M\omega_{0}=0.2$, we find that tidal disruption happens at $R_{t}\sim 5.84M$, smaller than the ISCO of a Schwarzschild BH. Thus, SBs very close to a central BH and oscillating at frequencies relevant to the system are of astrophysical interest. This is supported by more sophisticated numerical works Brown:2018gar . We neglected spin-spin effects in the motion of the SB. The corrections are proportional to $\sigma=qJ/m_{0}^{2}$, with $J$ the angular momentum of the SB Jefremov:2015gza . Again using Kepler’s law, one finds that corrections to the motion scale like $\sigma\propto q^{2/3}$, which is extremely small for the systems we consider. A follow-up to our work is to study the capacity of GW detectors to distinguish between these systems and isolated binaries. In particular, it is important to quantify the systematic errors incurred in parameter estimation from a signal originating in a hierarchical triple, using GW templates for isolated binaries. Moreover, it is important to extend our study to other motions. An interesting case is a SB describing a high-eccentricity orbit around a spinning SMBH. Such eccentric orbits can form naturally in non-trivial environments Cardoso:2020iji . In these orbits, the SB gets closer to the LR, which enhances the resonant excitation of the SMBH Thornburg:2019ukt and may lead to manifestations of superradiance Brito:2015oca .
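The tidal-disruption estimate can be checked numerically: carrying the factor of 2 from $R_{t}\sim 2\delta R(M/2m_{0})^{1/3}$ through Kepler's law gives $R_{t}\sim 2M/(M\omega_{0})^{2/3}$, which reproduces the quoted $R_{t}\sim 5.84M$ at $M\omega_{0}=0.2$ up to rounding:

```python
def tidal_radius(M_omega0, M=1.0):
    """Hills-mechanism estimate: R_t ~ 2 dR (M/2m0)^(1/3), with Kepler's law
    omega0 ~ sqrt(2 m0 / dR^3) eliminating dR, gives R_t ~ 2M/(M omega0)^(2/3)."""
    return 2.0 * M / M_omega0 ** (2.0 / 3.0)

r_t = tidal_radius(0.2)   # ~5.85M, inside the Schwarzschild ISCO at 6M
```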
Another interesting triple system is a pair of same-sized BHs with a third, lighter compact object orbiting around them. These spacetimes have been shown to have global properties not present in isolated BHs (e.g. global QNMs) Bernard:2019nkv ; Ikeda:2020xvt , and our results suggest that the lighter object can excite these global modes.

Acknowledgments. We thank Ana Carvalho for producing some of the figures in this work. We are grateful to Béatrice Bonga, Emanuele Berti, Hirotada Okawa and Paolo Pani for useful comments and suggestions. We thank UMass Dartmouth and Waseda University for warm hospitality while this work was being finalized. F.D. is indebted to Nur Rifat and Asia Haque for help provided during his stay at UMass Dartmouth. V.C. acknowledges financial support provided under the European Union’s H2020 ERC Consolidator Grant “Matter and strong-field gravity: New frontiers in Einstein’s theory” grant agreement no. MaGRaTh–646597. F.D. acknowledges financial support provided by FCT/Portugal through grant No. SFRH/BD/143657/2019. G.K. would like to acknowledge support from the National Science Foundation (NSF) under awards PHY-2106755 and DMS-1912716. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 101007855. We thank FCT for financial support through Project No. UIDB/00099/2020. We acknowledge financial support provided by FCT/Portugal through grants PTDC/MAT-APL/30043/2017 and PTDC/FIS-AST/7002/2020. The authors would like to acknowledge networking support by the GWverse COST Action CA16104, “Black holes, gravitational waves and fundamental physics.”

## References

* (1) LIGO Scientific Collaboration and Virgo Collaboration, B. P. Abbott et al., Phys. Rev. Lett. 116, 061102 (2016). * (2) LIGO Scientific, Virgo, R. Abbott et al., 2010.14527. * (3) KAGRA, T. Akutsu et al., Nature Astron. 3, 35 (2019), [1811.08079]. * (4) M.
Punturo et al., Classical and Quantum Gravity 27 (2010). * (5) L. Barack et al., Class. Quant. Grav. 36, 143001 (2019), [1806.05195]. * (6) E. Barausse et al., 2001.09793. * (7) LIGO Scientific, Virgo, R. Abbott et al., Phys. Rev. D 102, 043015 (2020), [2004.08342]. * (8) LIGO Scientific, Virgo, R. Abbott et al., Astrophys. J. Lett. 896, L44 (2020), [2006.12611]. * (9) LIGO Scientific, Virgo, R. Abbott et al., Phys. Rev. Lett. 125, 101102 (2020), [2009.01075]. * (10) LIGO Scientific, Virgo, B. Abbott et al., Astrophys. J. Lett. 892, L3 (2020), [2001.01761]. * (11) B. Liu and D. Lai, 2009.10068. * (12) G. Fragione, A. Loeb and F. A. Rasio, Astrophys. J. 902, L26 (2020), [2009.05065]. * (13) M. A. Martinez et al., Astrophys. J. 903, 67 (2020), [2009.08468]. * (14) W. Lu, P. Beniamini and C. Bonnerot, 2009.10082. * (15) M. J. Graham et al., Publ. Astron. Soc. Pac. 131, 078001 (2019), [1902.01945]. * (16) E. C. Bellm et al., PASP131, 018002 (2019), [1902.01932]. * (17) M. Graham et al., Phys. Rev. Lett. 124, 251102 (2020), [2006.14122]. * (18) I. Bartos, B. Kocsis, Z. Haiman and S. Márka, Astrophys. J. 835, 165 (2017), [1602.03831]. * (19) N. C. Stone, B. D. Metzger and Z. Haiman, Mon. Not. Roy. Astron. Soc. 464, 946 (2017), [1602.04226]. * (20) B. McKernan et al., ApJ884, L50 (2019), [1907.03746]. * (21) R. Genzel, F. Eisenhauer and S. Gillessen, Rev. Mod. Phys. 82, 3121 (2010). * (22) A. M. Ghez et al., The Astrophysical Journal 689, 1044 (2008). * (23) M. Zevin, J. Samsing, C. Rodriguez, C.-J. Haster and E. Ramirez-Ruiz, Astrophys. J. 871, 91 (2019), [1810.00901]. * (24) N. C. Stone, B. D. Metzger and Z. Haiman, Monthly Notices of the Royal Astronomical Society 464, 946 (2016), [https://academic.oup.com/mnras/article-pdf/464/1/946/18512767/stw2260.pdf]. * (25) X. Chen and W.-B. Han, Communications Physics 1, 53 (2018), [1801.05780]. * (26) A. Toubiana et al., 2010.06056. * (27) R. M. O’Leary, Y. Meiron and B. Kocsis, Astrophys. J. Lett. 
824, L12 (2016), [1602.02809]. * (28) C. L. Rodriguez et al., MNRAS463, 2109 (2016), [1601.04227]. * (29) S. F. P. Zwart and S. L. W. McMillan, (2000). * (30) A. Tokovinin, S. Thomas, M. Sterzik and S. Udry, A&A450, 681 (2006), [astro-ph/0601518]. * (31) T. Pribulla and S. M. Rucinski, Astron. J. 131, 2986 (2006), [astro-ph/0601610]. * (32) T. Robson, N. J. Cornish, N. Tamanini and S. Toonen, Phys. Rev. D 98, 064012 (2018), [1806.00500]. * (33) Y. Kozai, AJ67, 591 (1962). * (34) S. Naoz, Annual Review of Astronomy and Astrophysics 54, 441 (2016), [https://doi.org/10.1146/annurev-astro-081915-023315]. * (35) E. Poisson and C. M. Will, Gravity: Newtonian, Post-Newtonian, Relativistic (Cambridge University Press, 2014). * (36) B.-M. Hoang, S. Naoz, B. Kocsis, W. Farr and J. McIver, Astrophys. J. Lett. 875, L31 (2019), [1903.00134]. * (37) L. Randall and Z.-Z. Xianyu, 1902.08604. * (38) L. Randall and Z.-Z. Xianyu, 1907.02283. * (39) B. Deme, B.-M. Hoang, S. Naoz and B. Kocsis, Astrophys. J. 901, 125 (2020), [2005.03677]. * (40) F. Antonini and H. B. Perets, Astrophys. J. 757, 27 (2012), [1203.2938]. * (41) F. Antonini et al., The Astrophysical Journal 816, 65 (2016). * (42) B.-M. Hoang, S. Naoz, B. Kocsis, F. A. Rasio and F. Dosopoulou, The Astrophysical Journal 856, 140 (2018). * (43) B. Kocsis and J. Levin, Phys. Rev. D 85, 123005 (2012). * (44) P. Gupta, H. Suzuki, H. Okawa and K.-i. Maeda, Phys. Rev. D 101, 104053 (2020), [1911.11318]. * (45) H. Suzuki, P. Gupta, H. Okawa and K.-i. Maeda, 2006.11545. * (46) S. Cisneros, G. Goedecke, C. Beetle and M. Engelhardt, Monthly Notices of the Royal Astronomical Society 448, 2733 (2015), [https://academic.oup.com/mnras/article-pdf/448/3/2733/6007665/stv172.pdf]. * (47) Y. Meiron, B. Kocsis and A. Loeb, Astrophys. J. 834, 200 (2017), [1604.02148]. * (48) L. Randall and Z.-Z. Xianyu, Astrophys. J. 878, 75 (2019), [1805.05335]. * (49) K. W. Wong, V. Baibhav and E. Berti, Mon. Not. Roy. Astron. Soc. 
488, 5665 (2019), [1902.01402]. * (50) W.-B. Han and X. Chen, Mon. Not. Roy. Astron. Soc. 485, L29 (2019), [1801.07060]. * (51) A. Torres-Orjuela, X. Chen, Z. Cao, P. Amaro-Seoane and P. Peng, Phys. Rev. D 100, 063012 (2019), [1806.09857]. * (52) A. Torres-Orjuela, X. Chen and P. Amaro-Seoane, Phys. Rev. D 101, 083028 (2020), [2001.00721]. * (53) J. M. Ezquiaga and M. Zumalacárregui, 2009.12187. * (54) J. M. Ezquiaga, D. E. Holz, W. Hu, M. Lagos and R. M. Wald, 2008.12814. * (55) H. Yu and Y. Chen, 2009.02579. * (56) B. Bonga, H. Yang and S. A. Hughes, Phys. Rev. Lett. 123, 101103 (2019), [1905.00030]. * (57) H. Yang, B. Bonga, Z. Peng and G. Li, Phys. Rev. D 100, 124056 (2019), [1910.07337]. * (58) R. P. Kerr, Phys. Rev. Lett. 11, 237 (1963). * (59) S. A. Teukolsky, Astrophys. J. 185, 635 (1973). * (60) J. M. Bardeen, W. H. Press and S. A. Teukolsky, Astrophys. J. 178, 347 (1972). * (61) E. Berti, V. Cardoso and A. O. Starinets, Class. Quant. Grav. 26, 163001 (2009), [0905.2975]. * (62) U. Sperhake, V. Cardoso, C. D. Ott, E. Schnetter and H. Witek, Phys. Rev. D 84, 084038 (2011), [1105.5391]. * (63) N. E. Rifat, S. E. Field, G. Khanna and V. Varma, 1910.10473. * (64) W. Krivan, P. Laguna, P. Papadopoulos and N. Andersson, Physical Review D 56, 3395–3404 (1997). * (65) R. Lopez-Aleman, G. Khanna and J. Pullin, Class. Quant. Grav. 20, 3259 (2003), [gr-qc/0303054]. * (66) E. Pazos-Ávalos and C. O. Lousto, Physical Review D 72 (2005). * (67) P. A. Sundararajan, G. Khanna and S. A. Hughes, Phys. Rev. D76, 104005 (2007), [gr-qc/0703028]. * (68) E. Berti, V. Cardoso and M. Casals, Phys. Rev. D 73, 024013 (2006), [gr-qc/0511111], [Erratum: Phys.Rev.D 73, 109902 (2006)]. * (69) M. Davis, R. Ruffini, W. Press and R. Price, Phys. Rev. Lett. 27, 1466 (1971). * (70) Y. Mino, M. Sasaki, M. Shibata, H. Tagoshi and T. Tanaka, Prog. Theor. Phys. Suppl. 128, 1 (1997), [gr-qc/9712057]. * (71) V. Cardoso and J. P. Lemos, Phys. Lett. B 538, 1 (2002), [gr-qc/0202019]. * (72) E. 
Berti et al., Phys. Rev. D 81, 104048 (2010), [1003.0812]. * (73) V. Cardoso, A. del Rio and M. Kimura, Phys. Rev. D100, 084046 (2019), [1907.01561]. * (74) A. Starobinsky, Sov. Phys. JETP 37, 28 (1973). * (75) E. Poisson, Phys. Rev. D 47, 1497 (1993). * (76) NIST Digital Library of Mathematical Functions, http://dlmf.nist.gov/, Release 1.0.26 of 2020-03-15, F. W. J. Olver, A. B. Olde Daalhuis, D. W. Lozier, B. I. Schneider, R. F. Boisvert, C. W. Clark, B. R. Miller, B. V. Saunders, H. S. Cohl, and M. A. McClain, eds. * (77) V. Cardoso, A. S. Miranda, E. Berti, H. Witek and V. T. Zanchin, Phys. Rev. D 79, 064016 (2009), [0812.1806]. * (78) H. Georgi, The Physics of Waves (Prentice Hall, 1993). * (79) E. M. Purcell, H. C. Torrey and R. V. Pound, Phys. Rev. 69, 37 (1946). * (80) C. Sauvan, J. P. Hugonin, I. S. Maksymov and P. Lalanne, Phys. Rev. Lett. 110, 237401 (2013). * (81) J. Thornburg, B. Wardell and M. van de Meent, Phys. Rev. Res. 2, 013365 (2020), [1906.06791]. * (82) R. H. Price, S. Nampalliwar and G. Khanna, Phys. Rev. D 93, 044060 (2016), [1508.04797]. * (83) L. Gualtieri, E. Berti, V. Cardoso and U. Sperhake, Phys. Rev. D 78, 044024 (2008), [0805.1017]. * (84) R. Brito, V. Cardoso and P. Pani, Lect. Notes Phys. 906, pp.1 (2015), [1501.06570]. * (85) C. T. Cunningham and J. M. Bardeen, ApJ173, L137 (1972). * (86) J. M. Ezquiaga, W. Hu and M. Lagos, 2005.10702. * (87) Y. Nambu and S. Noda, Class. Quant. Grav. 33, 075011 (2016), [1502.05468]. * (88) Y. Nambu, S. Noda and Y. Sakai, Phys. Rev. D 100, 064037 (2019), [1905.01793]. * (89) J. G. Hills, Nature331, 687 (1988). * (90) E. Addison, P. Laguna and S. Larson, 1501.07856. * (91) H. Suzuki, Y. Nakamura and S. Yamada, 2009.06999. * (92) H. Brown, S. Kobayashi, E. M. Rossi and R. Sari, Mon. Not. Roy. Astron. Soc. 477, 5682 (2018), [1804.02911]. * (93) P. I. Jefremov, O. Y. Tsupko and G. S. Bisnovatyi-Kogan, Phys. Rev. D 91, 124030 (2015), [1503.07060]. * (94) V. Cardoso, C. F. Macedo and R. 
Vicente, 2010.15151. * (95) L. Bernard, V. Cardoso, T. Ikeda and M. Zilhão, Phys. Rev. D 100, 044002 (2019), [1905.05204]. * (96) T. Ikeda, L. Bernard, V. Cardoso and M. Zilhão, 2010.00008.
2101.01187
# Observations of a radio-bright, X-ray obscured GRS 1915+105

Motta, S. E.1,2, Kajava, J. J. E.3,4, Giustini, M.4, Williams, D. R. A.2,5, Del Santo, M.6, Fender, R.2,7, Green, D. A.8, Heywood, I.2,9,10, Rhodes, L.2, Segreto, A.6, Sivakoff, G.11, Woudt, P.A.7 1Istituto Nazionale di Astrofisica, Osservatorio Astronomico di Brera, via E. Bianchi 46, 23807 Merate (LC), Italy 2University of Oxford, Department of Physics, Astrophysics, Denys Wilkinson Building, Keble Road, OX1 3RH, Oxford, United Kingdom 3Department of Physics and Astronomy, FI-20014 University of Turku, Finland 4Centro de Astrobiología (CSIC-INTA), Camino Bajo del Castillo s/n, Villanueva de la Cañada, E-28692 Madrid, Spain 5Jodrell Bank Centre for Astrophysics, School of Physics and Astronomy, The University of Manchester, Manchester, M13 9PL, UK 6Istituto Nazionale di Astrofisica, IASF Palermo, Via U. La Malfa 153, 90146, Palermo, Italy 7Department of Astronomy, University of Cape Town, Private Bag X3, Rondebosch 7701, South Africa 8Astrophysics Group, Cavendish Laboratory, 19 J. J. Thomson Avenue, Cambridge CB3 0HE, UK 9Department of Physics and Electronics, Rhodes University, PO Box 94, Makhanda 6140, South Africa 10South African Radio Astronomy Observatory, Cape Town, South Africa. 11Department of Physics, University of Alberta, CCIS 4-181, Edmonton, AB T6G 2E1, Canada (Accepted XXX. Received YYY; in original form ZZZ)

###### Abstract

The Galactic black hole transient GRS 1915+105 is famous for its markedly variable X-ray and radio behaviour, and for being the archetypal galactic source of relativistic jets. It entered an X-ray outburst in 1992 and has been active ever since. Since 2018 GRS 1915+105 has declined into an extended low-flux X-ray plateau, occasionally interrupted by multi-wavelength flares. Here we report the radio and X-ray properties of GRS 1915+105 collected in this new phase, and compare the recent data to historic observations.
We find that while the X-ray emission remained unprecedentedly low for most of the time following the decline in 2018, the radio emission shows a clear mode change halfway through the extended X-ray plateau in 2019 June: from low flux ($\sim$3 mJy) and limited variability, to marked flaring with fluxes two orders of magnitude larger. GRS 1915+105 appears to have entered a low-luminosity canonical hard state, and then transitioned to an unusual accretion phase, characterised by heavy X-ray absorption/obscuration. Hence, we argue that a local absorber hides from the observer the accretion processes feeding the variable jet responsible for the radio flaring. The radio–X-ray correlation suggests that the current low X-ray flux state may be a signature of a super-Eddington state akin to the X-ray binaries SS433 or V404 Cyg.

###### keywords: accretion, accretion discs – black hole physics – X-rays: binaries – stars: jets

## 1 Introduction

In black hole (BH) X-ray binaries a stellar-mass black hole accretes via an accretion disc formed by matter stripped from a low-mass companion star. BH X-ray binaries are typically transient systems, i.e. they alternate between long states of quiescence, characterised by a luminosity typically of the order of $L\sim 10^{34}\,{\rm erg\,\,s^{-1}}$ (see Wijnands et al. 2015), and relatively short outbursts, during which their luminosity can reach $\sim 10^{39}\,{\rm erg\,\,s^{-1}}$. During outbursts these systems show clear repeating patterns of behaviour across various accretion states, each associated with mechanical feedback in the form of winds and relativistic jets (e.g., Fender et al. 2009, Ponti et al. 2012). The hard states, characterised by highly variable X-ray emission dominated by hard photons, are associated with steady radio jets (Fender et al., 2004).
On a few occasions, cold (i.e., consistent with being not ionised) winds, which appear to co-exist with the radio jets, have been observed in the optical band during the hard state (Muñoz-Darias et al., 2016), casting doubt on the idea that jets and winds are associated with different accretion states and therefore cannot co-exist. In the X-ray low-variability soft states, X-ray spectra are dominated by thermal emission from a geometrically thin, optically thick accretion disc, the radio emission is quenched (any residual radio emission in the soft state has so far been associated with ejecta launched before the transition to the soft state; see e.g. Bright et al. 2020), and X-ray (ionised) winds are seen (Ponti et al., 2014; Tetarenko et al., 2018). In between these two states lie the intermediate states, with properties in between those of the hard and the soft state, during which short-lived, powerful relativistic radio ejections are observed (Fender et al., 2009). Quasi-simultaneous X-ray and radio observations of BH X-ray binaries have been fundamental to the study of the connection between the accretion and the jet production mechanism, which led to the establishment of a disc-jet coupling paradigm (Fender et al., 2004). Such a coupling gives rise in the hard state to a well-known non-linear relation between the X-ray and the radio luminosity, known as the radio–X-ray correlation (e.g., Gallo et al. 2003 and Corbel et al. 2003), which also encompasses AGN when a mass scaling term is considered (Merloni et al., 2003; Falcke et al., 2004; Plotkin et al., 2012; Gültekin et al., 2019). GRS 1915+105 is one of the best studied Galactic BH X-ray binaries; it first appeared as a bright transient in August 1992 and remained very bright in X-rays and in radio until recently (Negoro et al., 2018).
GRS 1915+105 was the first Galactic source observed to display relativistic super-luminal radio ejections (Mirabel & Rodríguez 1994), and it is still considered the archetypal galactic source of relativistic jets. This system, located at a radio parallax distance of 8.6${}^{+2.0}_{-1.6}$ kpc, hosts a stellar-mass black hole (12.4${}^{+2.0}_{-1.8}$M⊙, Reid et al. 2014), believed to accrete erratically close to the Eddington limit. Several characteristic X-ray variability patterns observed in the X-ray light curve of GRS 1915+105 (Belloni et al., 2000) are believed to reflect transitions from and to three accretion states: two soft states (A and B), and a hard state, C, all slightly different from the canonical states seen in other BH binaries (Belloni & Motta, 2016). States A and B are characterised by limited X-ray variability, and a substantial contribution from an accretion disc with a variable temperature that can reach 2 keV. State C shows high X-ray variability and no disc contribution to the X-ray spectrum, and is known to be associated with steady radio jets (Rushton et al., 2010). Such jets appear as flat-top periods in the radio light curve, characterised by relatively high radio flux densities ($\sim$100 mJy beam$^{-1}$), an optically thick radio spectrum, and a flat low-flux X-ray light curve (Pooley & Fender 1997). In the radio–X-ray plane, state C corresponds to a high-luminosity extension of the radio–X-ray correlation (Gallo et al., 2003). Radio plateaus are generally preceded and followed by flaring periods due to the launch of relativistic ejections (Rodríguez & Mirabel 1999), which have been repeatedly resolved as extended radio jets on a range of scales from $\sim$1 mas to hundreds of arcseconds (Dhawan et al., 2000; Miller-Jones et al., 2005; Rushton et al., 2007; Fender et al., 1999; Miller-Jones et al., 2007).
While GRS 1915+105 is in many ways unique, it shares many properties with more conventional transient black-hole binaries (such as GX 339–4, see, e.g., Done et al. 2004, Soleri et al. 2008), and also with some quasars (Marscher et al., 2002; Chatterjee et al., 2009), which display mass-scaled versions of GRS 1915+105’s correlated X-ray and radio behaviour close to jet ejection events. Hence, comprehending the coupling between inflow and outflow in GRS 1915+105 is important not only for X-ray binary systems, but has broader relevance for studies of active galactic nuclei, and potentially of all jetted systems powered by accreting compact objects (Fender et al., 2004). In 2018 July ($\sim$MJD 58300), after 26 years of extreme X-ray and radio activity, GRS 1915+105 entered an unusually long period of low flux in the X-rays and radio (Negoro et al., 2018; Motta et al., 2019), which led some to believe that quiescence was close. Around the end of 2019 March (MJD 58600), GRS 1915+105 entered a sudden further X-ray dimming (Homan et al., 2019; Rodriguez et al., 2019), which reinforced the hypothesis that the 26-year long outburst of GRS 1915+105 was nearing an end. However, only days later, on 2019 May 14th (MJD 58617), renewed flaring activity at different wavelengths appeared to invalidate the quiescence hypothesis. After approximately a month of marked multi-wavelength activity (Iwakiri et al., 2019; Miller et al., 2019b; Neilsen et al., 2019; Jithesh et al., 2019; Vishal et al., 2019; Svinkin et al., 2019; Koljonen et al., 2019; Balakrishnan et al., 2019; Trushkin et al., 2019; Motta et al., 2019), GRS 1915+105 entered a new X-ray low-flux state. X-ray observations with Swift, NuSTAR and Chandra taken during this second low phase showed hard spectra characterised by heavy and occasionally partial covering absorption with equivalent column densities $N_{\rm H}>3\times 10^{23}$ cm$^{-2}$, i.e.
over an order of magnitude larger than the usual equivalent column density in the direction of GRS 1915+105 (Miller et al., 2019a; Koljonen & Tomsick, 2020; Miller et al., 2020; Balakrishnan et al., 2020). Intrinsic absorption in GRS 1915+105 had never previously been reported, and has been observed only rarely in other X-ray binaries. Two notable exceptions are the BH V404 Cyg (see, e.g., Życki et al. 1999 and Motta et al. 2017a), which showed clear signatures of heavy and variable intrinsic absorption during both of the outbursts monitored in the X-rays, and SS 433, which is believed to be obscured by its own inflated accretion disc (Fabrika, 2004). In both these systems obscuration was the consequence of erratic (V404 Cyg) or sustained (SS 433) super-Eddington accretion, which in both cases is associated with extreme activity of the radio jets (Spencer, 1979; Miller-Jones et al., 2019). In this paper we report on the behaviour of GRS 1915+105 based on the long-term monitoring operated by a number of X-ray all-sky monitors and radio facilities. We compare the recent (as of 2020) evolution of the system with its past behaviour, with the aim of highlighting the peculiarities of the current, highly unusual state. In Section 2 we describe our data reduction and analysis, in Sec. 3 we present our results, and in Sec. 4 we discuss our findings. Finally, in Sec. 5 we summarise our main results and outline our conclusions. ## 2 Observations and data analysis In this section we describe the reduction and analysis of the data from the radio and X-ray facilities used in this work. A log of the data considered is given in Tab. 1. 
Instrument | Energy/Frequency | Time covered (MJD) ---|---|--- Ryle Telescope | 15.5 GHz (350 MHz) | 49856–53898 AMI-LA | 15.5 GHz (5 GHz) | 54615–56925 MeerKAT | 1.28 GHz (0.86 GHz) | 57642–58926 RXTE/ASM | 2–12 keV | 50088–55859 MAXI/GSC | 2–12 keV | 55054–59169 Swift/BAT | 15–50 keV | 53347–59169 Table 1: A log of the data used in this work. For MeerKAT, the Ryle telescope and AMI-LA we give in parentheses the bandwidth used. ### 2.1 Radio #### 2.1.1 Ryle and AMI-LA telescopes From 1995 May to 2006 June (MJD 49850 to 53900) the Ryle telescope routinely observed GRS 1915+105 as part of an extensive monitoring campaign on a number of bright radio transients. In 2006 the Ryle telescope was partly converted into the Arcminute Microkelvin Imager Large Array (AMI-LA) and observations of GRS 1915+105 continued until 2016 January (MJD 57400), when the array was switched off to allow the original analog correlator to be upgraded with a digital one. Observations resumed in 2016 June (MJD 57640) and continued until 2020 March, when AMI-LA had to be shut down due to the Covid-19 outbreak (MJD 58926). Data from the Ryle telescope have been published by, e.g., Pooley & Fender (1997), Klein-Wolt et al. (2002), and Rushton et al. (2010), and we refer the reader to those works for details on the data reduction. The AMI-LA observations were conducted at a central frequency of 15.5 GHz with a 5 GHz bandwidth, divided into 8 channels for imaging (for the digital correlator data there were originally 4096 narrow channels). We used 3C286 as the flux/bandpass calibrator, and J1922+1530 as the interleaved phase calibrator. We reduced the data with a custom pipeline that uses the AMI reduce_dc software, which automatically flags for radio frequency interference (RFI), antenna shadowing, and hardware errors, performs Fourier transforms of the lag-delay data into frequency channels (for the analog correlator data), and then applies phase and amplitude calibrations (e.g., Perrott et al. 
2013). We carried out further flagging using the Common Astronomical Software Applications (CASA) package (McMullin et al. 2007), which was also used for the interactive cleaning. For imaging we used natural weighting with a clean gain of 0.1. To measure the source flux density we used the CASA task imfit. The synthesised beam of the AMI-LA when observing at the declination of GRS 1915+105 is 40 arcsec $\times$ 30 arcsec. The target is unresolved in all epochs. The data have been binned differently based on the brightness of the target. When the target was relatively faint (flux density $<$10 mJy beam-1) we report the average flux measured in each epoch, which had a variable total duration of 1 to 7 hr. When the target was brighter, we split each epoch into shorter segments (down to 6 minutes long), depending on the source flux. #### 2.1.2 MeerKAT telescope As part of the ThunderKAT large survey project (Fender et al., 2016) we observed GRS 1915+105 with the MeerKAT radio interferometer 38 times. Data were obtained at a central frequency of 1.28 GHz across a 0.86 GHz bandwidth consisting of either 4096 or 32768 channels (in the latter case the data were binned to 4096 channels for consistency before any further analysis). Observations covered the period between 2018 December and 2020 November (MJD 58460–59168). We initially observed the target every few weeks. When AMI-LA stopped operations, we switched to a weekly monitoring of GRS 1915+105 with MeerKAT. The first MeerKAT observation had a total duration of 90 min, of which 60 min were spent on-source, 20 min on the flux and band-pass calibrator, and 3 min on the phase calibrator. All other observations had a total on-source integration time of 15 minutes, with flux and bandpass calibrator, and phase calibrator times of 10 and 4 minutes, respectively. We used J1939$-$6342 as the flux and band-pass calibrator, and J2011$-$0644 as the complex gain calibrator. 
Between 58 and 63 of the 64 available dishes were used in the observations, with a maximum baseline of 7.698 km. The subsequent analysis was conducted via a set of Python scripts specifically tailored for the semi-automatic processing of MeerKAT data (OxKAT, https://github.com/IanHeywood/oxkat; Heywood 2020). We used CASA to flag the first and final 100 channels of the observing band, autocorrelations, and zero-amplitude visibilities. We then further flagged the data to remove RFI in the time and frequency domains. Flux density scaling, bandpass calibration and complex gain calibration were all performed within CASA using standard procedures. A spectral model for the phase calibrator was derived starting from the flux and band-pass calibrator, by temporarily binning the data into 8 equal spectral windows. We then averaged the data in time (8 s) and frequency (8 channels) for imaging purposes, and used WSClean (Offringa et al. 2012) to image the entire MeerKAT square degree field. We measured the fluxes by averaging the data in each epoch, so that each MeerKAT point (blue diamonds in Fig. 2, panel (a)) corresponds to 15 minutes of on-target time, except the first point (MJD 58460), which corresponds to a 60-min integration time. The target is unresolved in all observations, and in this paper we only consider the flux densities measured in the MeerKAT images using the imfit task in CASA. A more in-depth analysis of the MeerKAT maps will be presented in a dedicated paper (Motta et al. in prep). ### 2.2 X-ray all-sky monitors We extracted long-term light curves for GRS 1915+105 using the public data available on the web pages of RXTE/ASM (http://xte.mit.edu/ASM_lc.html) and MAXI/GSC (http://maxi.riken.jp/top/lc.html), and from the survey data collected with the BAT telescope on board the Neil Gehrels Swift Observatory (Swift). 
We used the MAXI on-demand tool (http://maxi.riken.jp/mxondem/) to extract the data covering the 2–12 keV band, in order to be able to directly compare the light curves from MAXI and the ASM. We converted the ASM and MAXI count rates into fluxes using an approximate counts-to-flux conversion factor, based on the mean count rates of the Crab, which correspond to 75 count s-1 for the ASM and 3.74 count s-1 for MAXI, respectively. We note that such a count rate to flux conversion is not rigorous, but it is sufficient for our purposes. We account for any bias introduced by the conversion by assuming a conservative uncertainty on the flux of 20 per cent. Owing to the larger energy interval covered by BAT (nominally 15–150 keV), a count rate to flux conversion would not be accurate when using the Swift/BAT transient monitor results (Krimm et al., 2013). Hence, we processed the BAT survey data retrieved from the HEASARC public archive using the BatImager code developed by Segreto et al. (2010), which is dedicated to the processing of coded-mask instrument data. BatImager performs image reconstruction via cross-correlation and, for each detected source, generates light curves and spectra. We processed BAT survey data from MJD 53347 to MJD 59169, and extracted one spectrum per day using the official BAT spectral redistribution matrix and a logarithmically binned energy grid. We then fitted the spectra from 15 keV to 50 keV with a simple power law and derived the observed flux. Using the fluxes obtained as described, we calculated an X-ray colour $C$ = $F_{\rm hard}/F_{\rm soft}$, where $F_{\rm hard}$ is the flux from BAT, and $F_{\rm soft}$ is the flux from either the ASM or MAXI. A higher colour means a harder spectrum. The ASM and MAXI data overlap with the BAT data by several years, and with each other for about two years (i.e. from MJD 55054 to 55859). This allows us to confirm that the ASM and MAXI provide consistent data (see Fig. 2, panel (b)). 
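As a concrete sketch of the scheme described above (the Crab count rates of 75 and 3.74 count s-1 and the 20 per cent uncertainty are taken from the text; the nominal 2–12 keV Crab flux of $2.4\times 10^{-8}\,{\rm erg\,cm^{-2}\,s^{-1}}$ is an illustrative assumption), the counts-to-flux conversion and the X-ray colour can be computed as:

```python
# Convert ASM / MAXI count rates to approximate 2-12 keV fluxes via the Crab.
# Crab count rates (75 and 3.74 count/s) are those quoted in the text;
# CRAB_FLUX_2_12 is a nominal Crab flux assumed here for illustration.
CRAB_RATE = {"ASM": 75.0, "MAXI": 3.74}   # count/s corresponding to 1 Crab
CRAB_FLUX_2_12 = 2.4e-8                    # erg/cm^2/s (assumed nominal value)

def rate_to_flux(rate, instrument):
    """Approximate 2-12 keV flux with a conservative 20% uncertainty.

    Returns (flux, uncertainty) in erg/cm^2/s.
    """
    flux = rate / CRAB_RATE[instrument] * CRAB_FLUX_2_12
    return flux, 0.2 * flux

def xray_colour(f_hard, f_soft):
    """X-ray colour C = F_hard / F_soft; a higher C means a harder spectrum."""
    return f_hard / f_soft
```

Because both fluxes here are simple rescalings of the count rates, the colour is only meaningful once the BAT rates have been converted to fluxes independently, as done with BatImager above.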
Variability on time-scales significantly shorter than a day is known to occur in the flux from GRS 1915+105 during various accretion states, both in the X-rays (Belloni et al., 2000) and in the radio (Pooley & Fender, 1997). However, the aim of this work is to study the long-term behaviour of this system. Thus, we focused on the variability occurring on time-scales longer than a few days, rather than on the details of any particular flare. Therefore, we rebinned the ASM, MAXI and BAT light curves to the same 1-day long time bins. We also extracted energy spectra in specific time intervals (see Sec. 3.2) using the on-demand MAXI tool with the default extraction parameters, specifying the good time intervals to be used for the extraction. The spectra were then fitted within xspec (v 12.11.0). Figure 1: AMI-LA and Ryle data (panel a), RXTE/ASM data (panel b) and BAT data (panel c), covering over 16 years. Panels b, c and d are colour-coded based on the spectral X-ray colour displayed in panel d, calculated as the ratio between the BAT and the ASM fluxes. Redder points correspond to softer spectra. The grey points in panel b are from MAXI (the same as in Fig. 2, panel b) and are plotted to allow a comparison with the ASM data. Similarly, the grey points in panel d correspond to the colours shown in Fig. 2 for comparison. Figure 2: AMI-LA and MeerKAT data (panel a), MAXI data (panel b), and BAT data (panel c), covering approximately 14 years. The colour-coding is the same as in Fig. 1, with the difference that the spectral X-ray colour is calculated as the ratio between the BAT and the MAXI fluxes. The grey points in panel b are from the ASM (the same as in Fig. 1, panel b) and are plotted to allow a comparison with the MAXI data. Similarly, the grey points in panel d correspond to the colours shown in Fig. 1 for comparison. Figure 3: A zoom of Fig. 2, focusing on the most recent evolution of GRS 1915+105. 
The vertical dashed line marks the time of the change in the radio behaviour that occurred on $\approx$MJD 58617. Figure 4: MAXI unfolded average spectra extracted during different phases of the evolution of GRS 1915+105, and the ratios to the best fits. Spectra were taken around the peak of the soft flare preceding Plateau 1 (Spectrum A, red); during Plateau 1 (Spectrum B, blue); during Plateau 2 in a flare-free time interval (Spectrum C, green); and during the Soft Phase (Spectrum D, magenta). The best-fit parameters are listed in Tab. 2. Figure 5: The radio–X-ray plane. We mark in grey the fluxes from all the BH transients considered by Motta et al. (2017a), plus MAXI J1820+070, based on Bright et al. (2020). The red dots mark the data from Rushton et al. (2010), who selected data corresponding to a bright hard state, or state C. The three symbols mark the three main phases described in Sec. 3: Plateau 1, Plateau 2, and soft. ## 3 Results ### 3.1 Overall behaviour Figure 1 displays, from the top: the radio light curve taken at a central frequency of 15.5 GHz (Ryle Telescope and AMI-LA data, light and dark cyan points), the soft X-ray light curve (ASM data covering the 2–12 keV band), the hard X-ray light curve (BAT data covering the 15–50 keV band), and the X-ray colour. Figure 2 shows, from the top: the radio light curve from data taken at a central frequency of 15.5 GHz (AMI-LA, clear blue points) and 1.28 GHz (MeerKAT, blue diamonds), the soft X-ray light curve (MAXI, covering the 2–12 keV band), and again the BAT light curve and the X-ray colour. Note that the two figures are plotted using similar time scales, and overlap by several years, but were kept separate to allow for the inspection of both the ASM and MAXI data. In both figures, all panels except the top ones are colour-coded so that redder corresponds to a softer spectrum (displayed in panel (d) in both figures). Wherever the ASM or MAXI data did not overlap with the BAT data, we left the points black. 
Part of the radio and X-ray data presented in Fig. 1 have already been published by, e.g., Pooley & Fender (1997), Fender et al. (1999), Pooley et al. (2010), Klein-Wolt et al. (2002) and Rushton et al. (2010). All the light curves are characterised by periods of intense flaring, interleaved with relatively short and quiet phases, both in the X-rays and in the radio. In Fig. 2 we can easily identify a time when both the radio and the X-ray behaviour of GRS 1915+105 changed, i.e. around MJD 58300 (2018 July), when the source entered a first low-flux phase approximately 11 months long, which we refer to as Plateau 1. The average flux level observed during Plateau 1 is approximately $F\approx 0.30\times 10^{-8}\,{\rm erg\,cm^{-2}\,s^{-1}}$ and $F\approx 0.15\times 10^{-8}\,{\rm erg\,cm^{-2}\,s^{-1}}$ in the MAXI and the BAT data, respectively, and is marked with a dotted line in panels (b) and (c) in both Fig. 1 and Fig. 2 for comparison. According to the X-ray colour plotted in panel (d) of Fig. 2, this decay led the source from a relatively soft state (around MJD 58000, with a colour of $\approx$0.05) to a significantly harder state, characterised by an X-ray colour of $\approx$1 (cyan in Fig. 2 and 3). The AMI-LA data show that in the radio band Plateau 1 corresponds to relatively low flux densities, between 1 and 5 mJy beam-1, consistent with the lower end of the radio flux densities observed over the 11 years of activity covered by the Ryle telescope. On MJD 58613 GRS 1915+105 showed a fast decay to an even lower X-ray flux – we refer to this phase as the Pre-Flare Dip. The Pre-Flare Dip can be easily discerned both in the MAXI and in the BAT data, as shown in Fig. 3. Shortly afterwards, GRS 1915+105 entered a multi-band flaring period – which we refer to as the Flaring Phase – that in the X-rays lasted approximately 1 month. The Flaring Phase, by contrast, appears as a few isolated points in both the MAXI and BAT light curves around MJD 58600. 
Inspection of the orbit-by-orbit BAT light curve (not displayed; it is available at http://swift.gsfc.nasa.gov/results/transients/GRS1915p105/) shows that the flares in this phase have a variable duration, from a few hours up to a few days. Given the coarse 1-day binning that we employed to compare the MAXI and BAT data, the X-ray colour displayed in Fig. 2 is not sensitive to the fast spectral changes occurring during this high-variability phase. Thus, the colour does not provide any specific information on the spectral properties of the flares, apart from the fact that they appear to be predominantly hard, hence more pronounced in the BAT curve. GRS 1915+105 subsequently entered a new X-ray plateau (Plateau 2) around MJD 58700, which was occasionally interrupted by short flares lasting approximately 1 day or less (see also Neilsen et al. 2020), again visible as isolated points in the BAT and MAXI light curves. This second plateau lasted over 13 months. Interestingly, the MeerKAT and AMI data show that the radio flaring did not cease with the X-ray and multi-band flaring, but continued for several months, until at least MJD 59100. In contrast, the X-ray flux level continued to slowly decline, from $F\approx 0.08\times 10^{-8}\,{\rm erg\,cm^{-2}\,s^{-1}}$ to $F\approx 0.06\times 10^{-8}\,{\rm erg\,cm^{-2}\,s^{-1}}$ in the MAXI data (red dashed and red solid lines in panel (b) in both Fig. 1 and Fig. 2), and from $F\approx 0.08\times 10^{-8}\,{\rm erg\,cm^{-2}\,s^{-1}}$ to $F\approx 0.03\times 10^{-8}\,{\rm erg\,cm^{-2}\,s^{-1}}$ in the BAT data (red dashed and red solid lines in panel (c) in the same figures). The signal-to-noise ratio of both the MAXI and BAT data was very limited in this phase, due to the low count rates from the source, but the X-ray colours measured in this phase suggest a markedly hard spectral shape. 
The radio flaring sampled by AMI-LA in this second plateau does not qualitatively differ from that observed previously over almost three decades (note that MeerKAT observes at a lower frequency than AMI-LA, and an optically-thin to optically-thick flare would peak sooner and higher at 15.5 GHz than at 1.28 GHz; van der Laan 1966). The radio emission is characterised by flares of variable amplitude and duration, spanning a flux range between 3 and 300 mJy beam-1 in the AMI-LA data, and up to 900 mJy beam-1 in the MeerKAT data, with variability on time scales from minutes to several hours. More recently, around $\sim$MJD 59050, the MAXI light curve and the evolution of the colour in Fig. 2 and Fig. 3 show that GRS 1915+105 returned to a flux comparable to that of Plateau 1, but with a much softer spectrum, characterised by an X-ray colour of approximately 0.05. This indicates that GRS 1915+105 has likely transitioned to a significantly softer state, which, consistent with what was observed in the past (see, e.g., data around MJD 53600 in Fig. 1), features diminished radio activity, with flux densities of approximately 10 mJy beam-1 and limited variability. This last Soft Phase lasted until $\approx$MJD 59140, when both the soft and hard X-ray fluxes dropped to values comparable to those observed during Plateau 2, and the radio flaring resumed, qualitatively similar to what was observed prior to the softening. At no point in the past has GRS 1915+105 reached fluxes as low as those measured during Plateaus 1 and 2 (see also Negoro et al. 2018), despite the presence of several low-flux phases observed both in the soft and in the hard X-rays, all in general associated with relatively low radio flux and low colours, indicative of a relatively soft state (see also Klein-Wolt et al. 2002 and Fender & Belloni 2004). The soft plateau that occurred around MJD 54500 is, however, noteworthy. 
The lack of radio observations during that plateau unfortunately prevents us from drawing solid conclusions on the state GRS 1915+105 was in, but it is possible that the source entered a relatively soft state (as indicated by the steep photon index measured) similar to that sampled by the source around MJD 59000. ### 3.2 Spectral analysis To further investigate the properties of the emission from GRS 1915+105 over the phases we described, we extracted four time-averaged MAXI energy spectra in specific phases of the evolution of the source, with variable exposure times chosen to avoid times of variable emission, and to guarantee similar signal-to-noise ratios. Figure 4 shows the spectra we obtained and their best fits: Spectrum A (in red, MJD 58034–58035), extracted at the peak of a soft flare preceding Plateau 1; Spectrum B, extracted during Plateau 1 (in blue, MJD 58450–58490); Spectrum C, extracted during Plateau 2 in a flare-free time interval (in green, MJD 58830–58850); and Spectrum D, extracted during the Soft Phase (in magenta, MJD 59095–59116). The best-fit parameters are reported in Tab. 2. We fitted the four spectra with phenomenological models consisting of either a power law or a disc blackbody continuum, each modified by interstellar absorption (tbfeo in xspec) and an additional partially covering absorber (tbpcf in xspec, see Wilms et al. 2000), so that the models used in xspec have the form: tbfeo$\times$tbpcf$\times$(powerlaw) or tbfeo$\times$tbpcf$\times$(discbb). We fixed the interstellar absorption parameter to $N_{\rm H}=5.0\times 10^{22}$ cm-2 (Miller et al., 2016; Zoghbi et al., 2016). In order to compare the two soft spectra (A and D) and the two hard spectra (B and C) directly, we fitted spectra A and D, and B and C, with the same underlying model, and attempted to reproduce the different spectral shapes by applying additional absorption. 
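To make the model structure explicit, the fragment below is a minimal numerical stand-in for the tbfeo$\times$tbpcf$\times$(powerlaw) combination: an interstellar absorber applied to everything, plus a partial-covering local absorber in front of a power law continuum. The toy $E^{-3}$ cross-section and its normalisation are illustrative assumptions, not the actual tbabs opacities:

```python
import math

def sigma_abs(e_kev):
    # Toy photoabsorption cross-section per H atom (cm^2). The real tbabs
    # opacities include absorption edges and element abundances; a rough
    # E^-3 scaling anchored at ~2e-22 cm^2 at 1 keV is assumed here.
    return 2e-22 * e_kev ** -3

def absorbed_powerlaw(e_kev, norm, gamma, nh_ism, nh_loc, pcf):
    """Stand-in for tbfeo x tbpcf x (powerlaw), in photons/cm^2/s/keV.

    Column densities are in units of 10^22 cm^-2, as in Tab. 2;
    pcf is the partial covering fraction of the local absorber.
    """
    continuum = norm * e_kev ** (-gamma)
    ism = math.exp(-nh_ism * 1e22 * sigma_abs(e_kev))
    local = pcf * math.exp(-nh_loc * 1e22 * sigma_abs(e_kev)) + (1.0 - pcf)
    return continuum * ism * local
```

With the Plateau 2 values from Tab. 2 ($N^{\rm loc}_{\rm H}=220$, $PCF=0.88$), the uncovered 12 per cent of the continuum dominates at soft energies, which illustrates why a partially covered Compton-thick absorber can still leave a measurable soft flux in Spectrum C.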
Spectra A and D are well fitted by a hot disc, with a characteristic temperature of $\approx$1.8 keV. The small normalisation ($K_{\rm bb}\approx 340$) is indicative of a small disc truncation radius. Spectrum D requires a multiplicative constant $K\approx$0.06, and additional uniform absorption of $(2.5\pm 0.4)\times 10^{22}\,{\rm cm}^{-2}$. The observed fluxes in the 2–10 keV band from spectra A and D are $\sim 3.6\times 10^{-8}\,{\rm erg\,cm^{-2}\,s^{-1}}$ and $2\times 10^{-9}\,{\rm erg\,cm^{-2}\,s^{-1}}$, respectively. Fitted individually, both spectra return best-fit parameters consistent with those reported above, and neither is better described by a power law continuum. We fitted spectra B and C using a power law continuum, and linked the power law photon index and normalisation across the two spectra, obtaining a photon index of $\mathnormal{\Gamma}=1.90\pm 0.04$. Spectrum C required additional absorption of $(220^{+50}_{-30})\times 10^{22}\,{\rm cm}^{-2}$, with a partial covering factor of $0.88\pm 0.02$. The observed fluxes in the 2–10 keV band from spectra B and C are $\sim 1.6\times 10^{-9}\,{\rm erg\,cm^{-2}\,s^{-1}}$ and $3\times 10^{-10}\,{\rm erg\,cm^{-2}\,s^{-1}}$, respectively. We note that the above analysis is based on simple phenomenological models, with the aim of providing clues regarding the nature of the accretion state(s) sampled by GRS 1915+105 after 2018 July. Table 2: Best-fit parameters for the spectra shown in Fig. 4. The interstellar hydrogen column was fixed to $N^{\rm ISM}_{\rm H}=5\times 10^{22}\,{\rm cm}^{-2}$. From top to bottom in column 1: multiplicative constant $K$; local absorption $N^{\rm loc}_{\rm H}$; local partial covering fraction $PCF$; disc-blackbody temperature $T_{\rm bb}$; photon index $\mathnormal{\Gamma}$; observed flux $F$ in the 2–10 keV band. 
The parameters marked with ∗ ($\mathnormal{\Gamma}$ in Spectra B and C, and $T_{\rm bb}$ in Spectra A and D, respectively) are linked, so that the same continuum is used to fit each pair of spectra. The xspec models used have the form: $\textsc{tbfeo}\times\textsc{tbpcf}\times(\textsc{powerlaw})$ or $\textsc{tbfeo}\times\textsc{tbpcf}\times(\textsc{discbb})$. Parameter | A | B (Plateau 1) | C (Plateau 2) | D (soft phase) ---|---|---|---|--- $K$ | 1 (fixed) | 1 (fixed) | 1 (fixed) | $0.064\pm 0.002$ $N^{\rm loc}_{\rm H}$ [$\times 10^{22}\,{\rm cm}^{-2}$] | - | - | $220^{+50}_{-30}$ | $2.5\pm 0.4$ $PCF$ | - | - | $0.88\pm 0.02$ | 1 (fixed) $T_{\rm bb}$ [keV] | $1.94\pm 0.04^{*}$ | - | - | $1.94\pm 0.03^{*}$ $\mathnormal{\Gamma}$ | - | $1.91\pm 0.03^{*}$ | $1.91\pm 0.03^{*}$ | - $F$ [$\times 10^{-8}\,{\rm erg\,cm^{-2}\,s^{-1}}$] | 3.6 | 0.16 | 0.03 | 0.2 $\chi^{2}$/d.o.f. | $134.03/119$ | $158.45/159$ | $12.89/12$ | $165.69/138$ ### 3.3 Radio–X-ray plane To better compare the current behaviour of GRS 1915+105 with its past behaviour, as well as with that of other BH transients, we placed it on the radio–X-ray plane. We measured the radio and X-ray fluxes corresponding to Plateau 1, Plateau 2, and the Soft Phase by taking the mean in the three phases. Figure 5 shows the radio–X-ray points (in grey) for a number of BH transients considered in Motta et al. (2017a), plus MAXI J1820+070 from Bright et al. (2020). The error bars account for both the scatter in the values and the uncertainty on the counts-to-flux conversion described above. GRS 1915+105 was traditionally considered an outlier with respect to the radio–X-ray correlation that the vast majority of BH transients follow, as shown by the red dots in the figure, which mark the position that GRS 1915+105 occupied during the C state phases of its 27-year-long outburst (points taken from Rushton et al. 2010). 
While during Plateau 2 and the Soft Phase GRS 1915+105 still clearly lies away from the other sources on the plane, during Plateau 1 GRS 1915+105 falls on the correlation, incidentally in approximately the same position occupied by Cyg X–1. ## 4 Discussion We have presented the results of a comparative radio and X-ray study of GRS 1915+105, primarily focusing on the evolution of the source from MJD 58300 (2018 July) to MJD 59200 (2020 November). We are motivated by the fact that in 2018 the source underwent a flux decline in the X-rays that might have been interpreted as a transition to a canonical hard state, possibly preceding a long-expected quiescence (Truss & Done 2006). Our data show that, despite the low X-ray fluxes displayed lately, GRS 1915+105 is still very active in the radio, and during the last several months has been showing marked activity that is qualitatively very similar to that observed in the past. This provides evidence for two important facts: first, that the correlation between radio and X-ray emission that characterised GRS 1915+105 for many years (Fender & Belloni 2004) ceased to exist sometime in 2019 June; and second, that the X-ray behaviour alone, in the absence of such a radio–X-ray correlation, is very misleading, as it offers only a partial view of the current state of the source. GRS 1915+105 has not entered quiescence, and might not be approaching it either (see also Neilsen et al. 2020 and Koljonen & Hovatta 2021 for a discussion of this topic). Rather, the accretion processes that must be at work to feed the jets responsible for the observed radio behaviour are for some reason not directly visible to the observer. The low X-ray fluxes observed during Plateau 1, i.e. after the exponential decline that occurred in 2018 (see also Negoro et al. 2018), appear highly unusual for GRS 1915+105. 
Furthermore, the almost total lack of variability in the X-ray emission, as opposed to the marked radio flaring observed after 2019 June, which characterises Plateau 2 (referred to as the obscured phase by Miller et al. 2020), is certainly unprecedented. Based on the data reported here, and on older data published by other authors (e.g., Klein-Wolt et al. 2002), Plateau 2 is the longest, lowest-flux, and hardest X-ray plateau ever observed in GRS 1915+105, and the only one associated with marked radio flaring. Every previous plateau was a plateau both in the radio and in the X-rays, and every flaring phase occurred in both bands. Since the beginning of Plateau 2, absorption has been a constant characteristic of the energy spectra of GRS 1915+105. Our spectral analysis results are fully consistent with those reported by previous works (Koljonen & Tomsick 2020; Miller et al. 2020; Neilsen et al. 2020; Balakrishnan et al. 2020; Koljonen & Hovatta 2021): an obscuring inhomogeneous medium is required to explain the average spectrum we extracted from Plateau 2. This agrees with the results from the Chandra spectra, which were taken when GRS 1915+105 was observed in a deeply obscured state (Miller et al., 2020). Absorption also affected the occasional flares captured by NICER during the obscured state (some of these flares were also detected by BAT, when sufficiently long to be seen in the 1-day averaged light curve), which still reached a high luminosity, indicating that the intrinsic X-ray luminosity of GRS 1915+105 is likely not far from the Eddington limit (Neilsen et al., 2020). The evolution of the X-ray emission during one particular flare reported by Neilsen et al. (2020) shows that the flares are characterised by harder-when-brighter spectra, and require high-density and variable inhomogeneous local absorption, properties very reminiscent of the behaviour of V404 Cyg during the vast majority of the flares observed in 2015 (Motta et al. 2017b). 
Also reminiscent of V404 Cyg are the spectral properties of the obscured state around the occasional flares observed. A high equivalent column density absorber, which almost completely covered the emission from the central portion of the accretion flow, was responsible for the spectrally hard and low-flux emission observed during a number of plateaus in the 2015 outbursts of V404 Cyg (Motta et al. 2017a; Kajava et al. 2018). The behaviour and properties of GRS 1915+105 during the obscured state are consistent with those observed in all the systems displaying phases of strong, variable local absorption: all tend to show high-amplitude flares. Some noteworthy examples are V4641 Sgr (Revnivtsev et al., 2002), Swift J1858–0814 (Hare et al., 2020; Muñoz-Darias et al., 2020), and even some Seyfert II galaxies (Moran et al., 2001), but perhaps the best example remains that of V404 Cyg. In the case of V404 Cyg, the behaviour observed both during the low-flux phases and during flares has been ascribed to the presence of an inflated accretion disc. Such a thick disc was likely sustained by radiation pressure induced by high accretion rate episodes, and was fragmenting into the clumpy outflow responsible for the local and variable Compton-thick absorption which affected the spectra. Something similar might be happening in GRS 1915+105 in the obscured state, even though the obscured state in this case is significantly more extended than for V404 Cyg. Owing to the better energy resolution and signal-to-noise ratio achieved with Chandra and NICER, respectively, observations of GRS 1915+105 in the obscured state offered a deeper insight into the properties of this state. Both Neilsen et al. (2020) and Miller et al. (2020) showed that the absorber in GRS 1915+105, besides being inhomogeneous, requires a multi-temperature profile, and is likely radially stratified. On the one hand, Miller et al. 
(2020) showed that at least two types of media form this absorber: an inner bound (failed) magnetic wind, and an outer, cooler component which could be a thermally driven outflow. In this scenario, the failed wind would be responsible for the obscuration. On the other hand, Neilsen et al. (2020), who analysed data taken at a different time, showed that their results are consistent with the presence of a radially stratified, puffed-up outer disc, which would hide the regions closer to the BH. It seems therefore plausible that the absorber in GRS 1915+105 covers a large portion of the accretion disc, from a few hundred gravitational radii to the outer disc, at radii larger than tens of thousands of $R_{\rm g}$. In this respect GRS 1915+105 differs from V404 Cyg. In the latter, both neutral and ionised winds were being launched from the outer disc, but a cold, optically thick and partially covering absorbing material was launched from within a few hundred gravitational radii of the black hole, and was clearly outflowing with a velocity of the order of 0.05c (Motta et al., 2017b). Considering GRS 1915+105 in the context of the radio–X-ray plane, we see that for the first time since its discovery, during Plateau 1 the system was consistently located on the correlation traced by the majority of known BH transients. During both Plateau 2 and the Soft Phase the position of GRS 1915+105 in the radio–X-ray plane confirms that a large fraction of the X-ray flux is lost. If, for the sake of the argument, we assume that the correlation holds during Plateau 2, we estimate that the obscuration of the inner accretion flow might be causing a loss of approximately a factor of $\sim$200 in X-ray luminosity, consistent with the obscuration scenario. However, note that the radio–X-ray correlation strictly holds only during the canonical hard state (Fender & Belloni 2004), when a compact steady jet is detected, and the radio and X-ray emission can be directly compared. 
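The factor-of-$\sim$200 estimate can be illustrated with a back-of-the-envelope inversion of the hard-state correlation, $L_{\rm R}\propto L_{\rm X}^{0.6}$ (Gallo et al., 2003). The sketch below anchors the correlation at a reference point; the slope is the commonly quoted value, and the luminosities in the reference point and usage example are hypothetical placeholders, not our measured fluxes:

```python
def predicted_lx(l_radio, l_radio_ref, l_x_ref, slope=0.6):
    """Invert L_R = k * L_X**slope, with k fixed by a reference point
    (l_radio_ref, l_x_ref) assumed to lie on the hard-state correlation."""
    return l_x_ref * (l_radio / l_radio_ref) ** (1.0 / slope)

def suppression_factor(l_x_observed, l_radio, l_radio_ref, l_x_ref, slope=0.6):
    """Ratio between the X-ray luminosity predicted by the correlation
    and the observed one, i.e. the apparent X-ray 'loss' factor."""
    return predicted_lx(l_radio, l_radio_ref, l_x_ref, slope) / l_x_observed
```

For a source sitting exactly on the correlation the factor is 1; an observed X-ray luminosity 200 times below the prediction, as estimated for Plateau 2, returns a factor of 200.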
The above estimate is intended only as an indication of the overall behaviour of the source: with the emission so heavily modified by absorption, we have no means to confirm the X-ray state of the source. Based on our results, it seems that Plateau 1 was a truly dim state, very similar to the canonical low-flux hard state typical of better-behaved BH transients, characterised by low X-ray and radio flux, as well as low long-term variability in both bands. Such a state is certainly unusual for GRS 1915+105, which had never before been observed in a low-luminosity canonical hard state (but see Gallo et al., 2003). Instead, despite the fact that Plateau 2 has been dubbed ‘the obscured state’, it is not really a state, but rather a condition directly dependent on the presence of local absorption along our line of sight. The accretion processes that must be feeding the markedly variable jet observed in radio must be happening beneath a complex layer of material local to the source, which shields the inner part of the accretion flow, and thus blocks a large portion of the X-ray emission. Behind this Compton-thick curtain of material, GRS 1915+105 is most likely evolving through various states and transitions, consistent with what it had been doing for 25 years until June 2018. This means that perhaps, as already proposed by Miller et al. (2020), GRS 1915+105 did not really enter the outburst phase in 1992, when it was discovered, but simply emerged from an obscured phase similar to the one we are witnessing at the time of the observations presented here. GRS 1915+105 is an important system for a number of reasons, including being in many ways a small-scale AGN. As in the case of many other Galactic BHs, its different states may correspond to a number of AGN classes, but the specific multi-band behaviour of GRS 1915+105 has clear counterparts in AGN (Miller et al., 2020).
So, how does this newly observed state of GRS 1915+105 – heavily absorbed in X-rays and active in radio – compare to AGN? In AGN, different absorbers on different scales affect the X-ray spectrum: from dust lanes in the host galaxy, to the parsec-scale torus, to accretion-disc-scale clouds/winds (see Ramos Almeida & Ricci, 2017, for a recent review). Large column densities $N_{\rm H}\sim 4{-}9\times 10^{23}$ cm$^{-2}$, comparable to those measured in the X-ray spectra of GRS 1915+105 in Plateau 2, are inferred from hard X-ray observations of local radio galaxies (see Ursini et al., 2018, and references therein) and young radio sources (e.g., Mrk 668, Guainazzi et al., 2004). The radio jet activity of young radio sources is also intermittent, possibly due to the interaction with the X-ray absorbing dense circumnuclear medium. However, most of the X-ray absorbers found in these sources are likely located on parsec scales and beyond. Compton-thick absorbers on accretion-disc scales are observed in some Changing-Look AGN (e.g., Risaliti et al., 2005), but their time-scales for variability, once scaled down to stellar-mass black holes, are much shorter than what is observed in GRS 1915+105. Thus, a phenomenon similar to the one observed in GRS 1915+105 has not yet been identified in AGN. Finally, while accreting super-massive and stellar-mass black holes are connected by the same fundamental physics (Merloni et al., 2003; Falcke et al., 2004), which governs their inner accretion flow, their larger-scale structures differ quite appreciably. In particular, accreting stellar-mass BHs have a companion star, the behaviour of which can potentially greatly affect the long-term behaviour of the accretion disc. Neilsen et al. (2020) speculated that if a vertically extended outer accretion disc is responsible for the obscuration in GRS 1915+105, its formation might have been triggered by changes in the companion star, e.g.
an increase in the mass supply into the accretion disc, which would necessarily trigger a change in the accretion flow from the outside in. The fact that an equivalent of the obscured state seen in GRS 1915+105 has not been seen in AGN might support this hypothesis.

## 5 Summary and conclusions

In 2018, the black hole binary GRS 1915+105, after 25 years of high-luminosity X-ray activity, decayed to a prolonged low-flux X-ray state. Due to this relatively sudden change in the X-ray behaviour of GRS 1915+105, some were led to believe that its outburst, the longest ever observed from a black hole X-ray binary, was approaching its end. We analysed the simultaneous X-ray and radio data collected over almost 3 decades with various facilities, focusing on the most recent evolution of the system. Our data show that at the beginning of the dim X-ray state GRS 1915+105 was also relatively faint in radio. Since June 2019 the system has been showing marked radio activity, characterised by the signatures of relativistic jets, and X-ray spectra affected by high and variable inhomogeneous absorption. Our results show that more recently GRS 1915+105, while still affected by heavy absorption, transitioned to a softer state, which was accompanied by a decrease in the radio flaring that resumed when GRS 1915+105 moved back to a hard(er) state. We argue that GRS 1915+105 first transitioned to a low-luminosity hard state, similar to the canonical hard state shown by many other black hole X-ray binaries, and then entered a prolonged obscured phase. In this phase the highly variable radio jets we have been observing for months must be fed by the same sort of accretion processes that have often been seen in the past in GRS 1915+105, and are now happening behind a complex layer of absorbing material.
We therefore conclude that GRS 1915+105 is far from being in quiescence, even though a substantial change in the accretion flow – perhaps the launch of a powerful outflow and/or the thickening of the outer disc – must have occurred at some point around June 2019. The behaviour of GRS 1915+105 in the obscured state appears to have no counterpart in its super-massive relatives, the AGN, where the time-scales typical of similar radio-bright obscured phases (once they are scaled up in mass) are either much longer, or much shorter than in GRS 1915+105.

## Acknowledgements

SEM acknowledges the Violette and Samuel Glasstone Research Fellowship programme, and the UK Science and Technology Facilities Council (STFC) for financial support. SEM and DW acknowledge the Oxford Centre for Astrophysical Surveys, which is funded through generous support from the Hintze Family Charitable Foundation. JJEK acknowledges support from the Academy of Finland grant 333112 and the Spanish MINECO grant ESP2017-86582-C4-1-R. MG is supported by the “Programa de Atracción de Talento” of the Comunidad de Madrid, grant number 2018-T1/TIC-11733. MDS and AS acknowledge financial contribution from the agreement ASI-INAF n.2017-14-H.0 and from the INAF mainstream grant. PAW acknowledges financial support from the University of Cape Town and the National Research Foundation. We also acknowledge support from the European Research Council under grant ERC-2012-StG-307215 LODESTONE. We thank the staff of the Mullard Radio Astronomy Observatory, University of Cambridge, for their support in the maintenance and operation of AMI. We thank the staff at the South African Radio Astronomy Observatory (SARAO) for scheduling the MeerKAT observations. The MeerKAT telescope is operated by the South African Radio Astronomy Observatory, which is a facility of the National Research Foundation, an agency of the Department of Science and Innovation.
This research has made use of MAXI data provided by RIKEN, JAXA and the MAXI team (Matsuoka et al., 2009). All the authors wish to heartily thank Guy Pooley, who sadly recently passed away. His work was instrumental for the understanding of the radio properties of GRS 1915+105.

## Data Availability

The un-calibrated MeerKAT visibility data presented in this paper are publicly available in the archive of the South African Radio Astronomy Observatory at https://archive.sarao.ac.za, subject to a standard proprietary period of one year. Data from the Swift/BAT and the RXTE/ASM are publicly available in the NASA HEASARC Data archive. The MAXI/GSC data are made available by RIKEN, JAXA and the MAXI team. Data that are not available through public archives, and all source code, will be shared on reasonable request to the corresponding author.

## References

* Balakrishnan M., Tetarenko B., Corrales L., Reynolds M., Miller J. M., 2019, The Astronomer’s Telegram, 12848
* Balakrishnan M., Miller J. M., Reynolds M. T., Kammoun E., Zoghbi A., Tetarenko B. E., 2020, arXiv e-prints, p. arXiv:2012.15033
* Belloni T. M., Motta S. E., 2016, Transient Black Hole Binaries. Springer-Verlag Berlin Heidelberg, p. 61, doi:10.1007/978-3-662-52859-4_2
* Belloni T., Klein-Wolt M., Méndez M., van der Klis M., van Paradijs J., 2000, A&A, 355, 271
* Bright J. S., et al., 2020, Nature Astronomy, 4, 697
* Chatterjee R., et al., 2009, ApJ, 704, 1689
* Corbel S., Nowak M. A., Fender R. P., Tzioumis A. K., Markoff S., 2003, A&A, 400, 1007
* Dhawan V., Mirabel I. F., Rodríguez L. F., 2000, ApJ, 543, 373
* Done C., Wardziński G., Gierliński M., 2004, MNRAS, 349, 393
* Fabrika S., 2004, Astrophysics and Space Physics Reviews, 12, 1
* Falcke H., Körding E., Markoff S., 2004, A&A, 414, 895
* Fender R., Belloni T., 2004, ARA&A, 42, 317
* Fender R. P., Garrington S. T., McKay D. J., Muxlow T. W. B., Pooley G. G., Spencer R. E., Stirling A. M., Waltman E. B., 1999, MNRAS, 304, 865
* Fender R. P., Belloni T. M., Gallo E., 2004, MNRAS, 355, 1105
* Fender R. P., Homan J., Belloni T. M., 2009, MNRAS, 396, 1370
* Fender R., et al., 2016, in MeerKAT Science: On the Pathway to the SKA. p. 13 (arXiv:1711.04132)
* Gallo E., Fender R. P., Pooley G. G., 2003, MNRAS, 344, 60
* Guainazzi M., Siemiginowska A., Rodriguez-Pascual P., Stanghellini C., 2004, A&A, 421, 461
* Gültekin K., King A. L., Cackett E. M., Nyland K., Miller J. M., Di Matteo T., Markoff S., Rupen M. P., 2019, ApJ, 871, 80
* Hare J., et al., 2020, ApJ, 890, 57
* Heywood I., 2020, oxkat: Semi-automated imaging of MeerKAT observations (ascl:2009.003)
* Homan J., Neilsen J., Steiner J., Remillard R., Altamirano D., Gendreau K., Arzoumanian Z., 2019, The Astronomer’s Telegram, 12742
* Iwakiri W., et al., 2019, The Astronomer’s Telegram, 12787
* Jithesh V., Maqbool B., Dewangan G. C., Misra R., 2019, The Astronomer’s Telegram, 12805
* Kajava J. J. E., Motta S. E., Sánchez-Fernández C., Kuulkers E., 2018, A&A, 616, A129
* Klein-Wolt M., Fender R. P., Pooley G. G., Belloni T., Migliari S., Morgan E. H., van der Klis M., 2002, MNRAS, 331, 745
* Koljonen K. I. I., Hovatta T., 2021, arXiv e-prints, p. arXiv:2102.00693
* Koljonen K. I. I., Tomsick J. A., 2020, A&A, 639, A13
* Koljonen K., Vera R., Lahteenmaki A., Tornikoski M., 2019, The Astronomer’s Telegram, 12839
* Krimm H. A., et al., 2013, ApJS, 209, 14
* Marscher A. P., Jorstad S. G., Gómez J.-L., Aller M. F., Teräsranta H., Lister M. L., Stirling A. M., 2002, Nature, 417, 625
* Matsuoka M., et al., 2009, PASJ, 61, 999
* McMullin J. P., Waters B., Schiebel D., Young W., Golap K., 2007, in Shaw R. A., Hill F., Bell D. J., eds, Astronomical Society of the Pacific Conference Series Vol. 376, Astronomical Data Analysis Software and Systems XVI. p. 127
* Merloni A., Heinz S., di Matteo T., 2003, MNRAS, 345, 1057
* Miller-Jones J. C. A., McCormick D. G., Fender R. P., Spencer R. E., Muxlow T. W. B., Pooley G. G., 2005, MNRAS, 363, 867
* Miller-Jones J. C. A., Rupen M. P., Fender R. P., Rushton A., Pooley G. G., Spencer R. E., 2007, MNRAS, 375, 1087
* Miller-Jones J. C. A., et al., 2019, Nature, 569, 374
* Miller J. M., et al., 2016, ApJ, 821, L9
* Miller J. M., Balakrishnan M., Reynolds M., Fabian A. C., Kaastra J., Kallman T., 2019a, The Astronomer’s Telegram, 12771
* Miller J. M., Reynolds M. T., Zoghbi A., Fabian A. C., Kaastra J. S., Kallman T., Proga D., Raymond J., 2019b, The Astronomer’s Telegram, 12788
* Miller J. M., et al., 2020, ApJ, 904, 30
* Mirabel I. F., Rodríguez L. F., 1994, Nature, 371, 46
* Moran E. C., Kay L. E., Davis M., Filippenko A. V., Barth A. J., 2001, ApJ, 556, L75
* Motta S. E., Kajava J. J. E., Sánchez-Fernández C., Giustini M., Kuulkers E., 2017a, MNRAS, 468, 981
* Motta S. E., et al., 2017b, MNRAS, 471, 1797
* Motta S., Williams D., Fender R., Titterington D., Green D., Perrott Y., 2019, The Astronomer’s Telegram, 12773
* Muñoz-Darias T., et al., 2016, Nature, 534, 75
* Muñoz-Darias T., et al., 2020, ApJ, 893, L19
* Negoro H., et al., 2018, The Astronomer’s Telegram, 11828
* Neilsen J., et al., 2019, The Astronomer’s Telegram, 12793
* Neilsen J., Homan J., Steiner J. F., Marcel G., Cackett E., Remillard R. A., Gendreau K., 2020, ApJ, 902, 152
* Offringa A. R., van de Gronde J. J., Roerdink J. B. T. M., 2012, A&A, 539, A95
* Perrott Y. C., et al., 2013, MNRAS, 429, 3330
* Plotkin R. M., Markoff S., Kelly B. C., Körding E., Anderson S. F., 2012, MNRAS, 419, 267
* Ponti G., Fender R. P., Begelman M. C., Dunn R. J. H., Neilsen J., Coriat M., 2012, MNRAS, 422, L11
* Ponti G., Muñoz-Darias T., Fender R. P., 2014, MNRAS, 444, 1829
* Pooley G. G., Fender R. P., 1997, MNRAS, 292, 925
* Pooley D., Homan J., Heinke C., 2010, The Astronomer’s Telegram, 2974
* Ramos Almeida C., Ricci C., 2017, Nature Astronomy, 1, 679
* Reid M. J., McClintock J. E., Steiner J. F., Steeghs D., Remillard R. A., Dhawan V., Narayan R., 2014, ApJ, 796, 2
* Revnivtsev M., Gilfanov M., Churazov E., Sunyaev R., 2002, A&A, 391, 1013
* Risaliti G., Elvis M., Fabbiano G., Baldi A., Zezas A., 2005, ApJ, 623, L93
* Rodríguez L. F., Mirabel I. F., 1999, ApJ, 511, 398
* Rodriguez J., Chenevez J., Cangemi F., Corbel S., 2019, The Astronomer’s Telegram, 12755
* Rushton A., et al., 2007, MNRAS, 374, L47
* Rushton A., Spencer R., Fender R., Pooley G., 2010, A&A, 524, A29
* Segreto A., Cusumano G., Ferrigno C., La Parola V., Mangano V., Mineo T., Romano P., 2010, A&A, 510, A47
* Soleri P., Belloni T., Casella P., 2008, MNRAS, 383, 1089
* Spencer R. E., 1979, Nature, 282, 483
* Svinkin D., et al., 2019, The Astronomer’s Telegram, 12818
* Tetarenko B. E., Lasota J. P., Heinke C. O., Dubus G., Sivakoff G. R., 2018, Nature, 554, 69
* Trushkin S. A., Nizhelskij N. A., Tsybulev P. G., Bursov N. N., Shevchenko A. V., 2019, The Astronomer’s Telegram, 12855
* Truss M., Done C., 2006, MNRAS, 368, L25
* Ursini F., Bassani L., Panessa F., Bazzano A., Bird A. J., Malizia A., Ubertini P., 2018, MNRAS, 474, 5684
* Vishal J., Banerjee Dipankar K. P., 2019, The Astronomer’s Telegram, 12806
* Wijnands R., Degenaar N., Armas Padilla M., Altamirano D., Cavecchi Y., Linares M., Bahramian A., Heinke C. O., 2015, MNRAS, 454, 1371
* Wilms J., Allen A., McCray R., 2000, ApJ, 542, 914
* Zoghbi A., et al., 2016, ApJ, 833, 165
* Życki P. T., Done C., Smith D. A., 1999, MNRAS, 309, 561
* van der Laan H., 1966, Nature, 211, 1131
2101.01188
# Rapid and Robust Parameter Inference for Binary Mergers

Neil J. Cornish

eXtreme Gravity Institute, Department of Physics, Montana State University, Bozeman, Montana 59717, USA

###### Abstract

The detection rate for compact binary mergers has grown as the sensitivity of the global network of ground-based gravitational wave detectors has improved, now reaching the stage where robust automation of the analyses is essential. Automated low-latency algorithms have been developed that send out alerts when candidate signals are detected. The alerts include sky maps to facilitate electromagnetic follow-up observations, along with probabilities that the system might contain a neutron star, and hence be more likely to generate an electromagnetic counterpart. Data quality issues, such as loud noise transients (glitches), can adversely affect the low-latency algorithms, causing false alarms and throwing off parameter estimation. Here a new analysis method is presented that is robust against glitches, and capable of producing fully Bayesian parameter inference, including sky maps and mass estimates, in a matter of minutes. Key elements of the method are wavelet-based de-noising, penalized maximization of the likelihood during the initial search, rapid sky localization using pre-computed inner products, and heterodyned likelihoods for full Bayesian inference.

## I Introduction

What started with a trickle in 2015 Abbott et al. (2016) has now turned into a veritable deluge Abbott et al. (2019, 2020a) of gravitational wave signals detected by the LIGO and Virgo instruments. Keeping up with the ever-increasing event rate is challenging. While the searches for gravitational wave signals are now highly automated and capable of producing near-real-time alerts Messick et al. (2017); Klimenko et al. (2016); Canton et al. (2020); Adams et al. (2016); Chu (2017), full parameter inference Veitch et al. (2015); Ashton et al. (2019) has lagged behind.
This is in part due to the large computational cost of fully Bayesian parameter inference, and in part due to the challenge of working with data that needs to be carefully calibrated Viets et al. (2018); Collaboration et al. (2018) and cleaned of various noise contaminants Driggers et al. (2019); Davis et al. (2019); Vajente et al. (2020); Cornish et al. (2020). The instrument noise in the LIGO and Virgo data is, for the most part, well described as locally stationary and Gaussian Abbott et al. (2020b). Short duration signals, such as high mass binary black holes, are in the sensitive band of the detector for a second or less, and the odds of the signal encountering a noise transient are low. However, for longer duration signals, such as low mass black hole binaries or systems that include a neutron star, the signals are in the sensitive band of the detectors for tens of seconds or even minutes, and it is likely that the signal will encounter a noise transient. As the low-frequency sensitivity of the detectors improves, low mass systems are virtually guaranteed to encounter noise transients. The LIGO/Virgo analyses have been fortified against noise transients using a combination of vetoes Abbott et al. (2018), glitch robust search statistics Allen (2005), gating Usman et al. (2016) and glitch subtraction Cornish et al. (2020). Gating of glitches is used in the online searches, while glitch subtraction is performed prior to off-line parameter estimation. Here a proof of concept for a robust end-to-end low-latency Bayesian parameter estimation algorithm - QuickCBC - is presented. The QuickCBC algorithm reads in calibrated strain data, performs robust on-source spectral estimation, executes a rapid search for compact binary coalescence (CBC) signals, uses wavelet de-noising to subtract any glitches from the search residuals, produces low-latency sky maps and initial parameter estimates, followed by full Bayesian parameter estimation.
For binary black holes the entire process takes just minutes on a laptop; for binary neutron stars, initial sky maps and mass estimates are ready in minutes, and full results are ready in less than an hour. The QuickCBC code is open source https://github.com/eXtremeGravityInstitute/QuickCBC, and can be used to analyze public LIGO-Virgo data hosted by the Gravitational Wave Open Science Center https://www.gw-openscience.org. For testing purposes the algorithm was run in real time during the LIGO-Virgo O3 observing run, automatically generating results for triggers sent to the Gravitational-Wave Candidate Event Database. Existing algorithms can perform most of the individual steps in the QuickCBC algorithm. For example, the PyCBC search algorithm Usman et al. (2016) performs low-latency searches for CBC signals that incorporate glitch mitigation via a chi-squared test Allen (2005) and automatic gating of loud glitches Usman et al. (2016). The BayesWave algorithm Cornish and Littenberg (2015); Littenberg and Cornish (2015); Cornish et al. (2020) produces on-source spectral estimates and performs glitch subtraction Abbott et al. (2017); Pankow et al. (2018); Abbott et al. (2020a). The BayesStar algorithm Singer and Price (2016) produces low-latency sky maps to help guide the search for electromagnetic counterparts to binary mergers, and several rapid parameter estimation algorithms have been developed Pankow et al. (2015); George and Huerta (2018); Wysocki et al. (2019); Smith et al. (2020); Delaunoy et al. (2020). What is novel is that QuickCBC is a fully automated, end-to-end analysis algorithm that is robust against glitches, and able to produce reliable results in a matter of minutes. Many of the methods used by QuickCBC, such as wavelet de-noising Torrence and Compo (1998), banded likelihoods for glitch rejection, and heterodyned likelihoods for rapid inference Cornish (2010); Cornish and Shuman (2020), are new to LIGO-Virgo data analysis.
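To illustrate the heterodyned-likelihood idea mentioned above, the sketch below pre-computes per-band summary data against a fiducial waveform and evaluates the likelihood from the slowly varying ratio $r(f)=h(f)/h_{0}(f)$. This is a zeroth-order toy (published heterodyne/relative-binning implementations expand the ratio to linear order within each band), and the function names and toy waveform are illustrative assumptions, not the QuickCBC API.

```python
import numpy as np

def make_summary(d, h0, S, edges, df):
    """Pre-compute per-band summary data against a fiducial template h0:
    A_b ~ <d|h0> restricted to band b, B_b ~ <h0|h0> restricted to band b,
    using the 4*df*sum(./S) inner-product convention."""
    A, B = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sl = slice(lo, hi)
        A.append(4.0 * df * np.sum(d[sl] * np.conj(h0[sl]) / S[sl]))
        B.append(4.0 * df * np.sum(np.abs(h0[sl])**2 / S[sl]))
    return np.array(A), np.array(B)

def het_loglike(h, h0, A, B, edges):
    """Heterodyned log-likelihood: the smooth ratio r(f) = h(f)/h0(f) is
    evaluated once per band (zeroth order) instead of at every frequency bin."""
    centers = (edges[:-1] + edges[1:]) // 2
    r = h[centers] / h0[centers]
    dh = np.real(np.sum(np.conj(r) * A))     # approximates <d|h>
    hh = np.real(np.sum(np.abs(r)**2 * B))   # approximates <h|h>
    return dh - 0.5 * hh
```

The speed-up comes from replacing a sum over thousands of frequency bins by a sum over a few dozen bands; the rapidly oscillating part of the waveform phase cancels in the ratio, which is why only the smooth residual needs to be resolved.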
Core elements of the QuickCBC algorithm have been merged with the BayesWave algorithm Chatziioannou et al. (2020). The key difference between the implementations is that the BayesWave variant Chatziioannou et al. (2021) jointly marginalizes over the CBC signal parameters, a model for the power spectral density (PSD) and a wavelet based model for noise transients. The QuickCBC algorithm uses a fixed PSD and a point estimate for any noise transients. The other key difference is speed: QuickCBC can provide results with a latency of minutes, while the more refined BayesWave+CBC analysis Chatziioannou et al. (2021) takes hours.

## II Overview of the QuickCBC algorithm

The QuickCBC algorithm works with short snippets of LIGO-Virgo data, typically 4 to 8 seconds in length when searching for binary black holes and 16 to 32 seconds in length when searching for binary neutron stars. The run time scales roughly linearly with the data volume. The first step is to produce estimates for the power spectral density in each detector. On-source spectral estimation, where the short segment of data to be searched is also used to estimate the power spectral density (PSD), can be thrown off by the presence of loud signals or loud glitches. To avoid such biases, QuickCBC uses an iterative approach that combines a running median estimate for the spectrum with spectral line identification and wavelet de-noising Torrence and Compo (1998). The de-noising removes signals and glitches, so only the spectral estimate from the first stage of the analysis is passed to the second stage. The second stage performs a rapid, network coherent search for CBC signals using a parallel tempered Markov Chain Monte Carlo algorithm (PTMCMC) Swendsen and Wang (1986) with a banded likelihood that is analytically maximized over amplitude, phase and arrival time. Only the intrinsic parameters of the signal - masses and spins - are explored by the PTMCMC.
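The analytic maximization over arrival time and phase in the search stage is conventionally done with an inverse FFT of the data-template overlap, which returns the complex matched-filter output at every time shift at once; taking its magnitude maximizes over the phase. The sketch below applies this standard trick to whitened time series and is an illustration of the technique, not the QuickCBC implementation (which also maximizes band-by-band and over amplitude).

```python
import numpy as np

def max_snr_time_phase(d, h):
    """Matched-filter SNR for whitened data d and whitened template h,
    maximized analytically over arrival time (the inverse FFT gives the
    overlap at every shift) and phase (magnitude of the complex overlap).
    Returns (peak SNR, best-fit time shift in samples)."""
    n = len(d)
    D, H = np.fft.rfft(d), np.fft.rfft(h)
    corr = D * np.conj(H)
    corr[0] = 0.0                        # drop DC
    if n % 2 == 0:
        corr[-1] = 0.0                   # drop Nyquist
    zpad = np.zeros(n, dtype=complex)
    zpad[:len(corr)] = 2.0 * corr        # positive frequencies only (analytic signal)
    z = np.fft.ifft(zpad)                # complex overlap vs. time shift
    snr = np.abs(z) / np.sqrt(np.dot(h, h))   # normalize by sqrt(<h|h>)
    k = int(np.argmax(snr))
    return snr[k], k
```

A single FFT thus replaces an explicit search over every possible arrival time, which is what makes the first-stage likelihood cheap enough to call inside a PTMCMC over the intrinsic parameters only.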
The banded likelihood automatically identifies and rejects frequency bands that are impacted by noise transients. The removal is done separately for each time delay, resulting in a robust time-frequency glitch rejection method that can detect signals in the presence of glitches. The third stage subtracts the best-fit CBC waveform from the data, and performs a second round of spectral estimation and wavelet de-noising. The de-noising produces a glitch model that is subtracted from the original data, while preserving any gravitational wave signals. The cleaned data and updated spectral estimates are used in the subsequent stages of the analysis. The fourth stage refines the estimates of the intrinsic parameters using a standard non-maximized and non-banded likelihood function. Consequently, the amplitude, phase and arrival time at each detector also have to be explored by the PTMCMC. The refined estimates for the intrinsic parameters are then passed to the fifth stage of the analysis, which uses a PTMCMC algorithm with an algebraic likelihood function to map out the extrinsic parameters of the source - sky location, luminosity distance, inclination and polarization angles - while holding the intrinsic parameters fixed. Since the extrinsic parameters only impact the projection of the waveform onto the detectors, the inner products in the likelihood can be pre-computed, resulting in an algebraic likelihood function that can be evaluated in a fraction of a microsecond Cornish (2016). With the first five stages complete the algorithm will have produced a full three-dimensional sky map (RA, DEC and luminosity distance), along with estimates for the component masses and spins. All in about the time it took you to read this paragraph. The precise run time will depend on the duration and bandwidth of the signal and the speed and number of computation cores.
For binary black hole systems at current LIGO/Virgo sensitivity it is usually sufficient to use 4 seconds of data sampled at 2048 Hz. For neutron star - black hole systems it is enough to use 8 to 16 seconds of data sampled at 2048 Hz, while for binary neutron star systems we need 16 to 32 seconds of data sampled at 4096 Hz. For a binary black hole system, running on a 2016 MacBook Pro laptop with a 2.9 GHz quad-core processor, it takes $\sim 60$ seconds for the PSD estimation and intrinsic parameter search, and an additional $\sim 30$ seconds to complete the extrinsic parameter search and produce sky maps. The cost of the intrinsic parameter search scales linearly with the data duration, while the cost of the extrinsic parameter search scales linearly with the sample rate. Thus, the intrinsic parameter search for neutron star - black hole binaries takes two to four times longer than for binary black holes, but the time to produce a sky map is the same. Sky maps can be produced in very low latency by skipping the intrinsic search and instead using the intrinsic parameters provided by the search pipelines (as is done by BayesStar Singer and Price (2016)). The run time to produce a sky map with QuickCBC is comparable to, or a little faster than, BayesStar. The main difference is that the QuickCBC maps are fully Bayesian, while the BayesStar maps are only approximately so.

Figure 1: Workflow diagram for the QuickCBC algorithm. The time domain data, $d(t)$, is read in, windowed, and Fourier transformed to produce $\tilde{d}(f)$. An initial on-source PSD estimate $S(f)$ is produced using wavelet de-noising to remove glitch and signal power. Next the data is searched using a glitch-robust likelihood function that maximizes over extrinsic parameters and returns initial estimates for the intrinsic parameters $\vec{\eta}$.
The PSD estimation is then repeated on the residual $d-h(\vec{\eta})$, and wavelet de-noising is used to fit and remove any glitches $\tilde{g}(f)$ that might be present in the data. The de-glitched data, $\tilde{d}(f)-\tilde{g}(f)$, is used to pre-compute various inner products for the projected network likelihood, allowing for a rapid mapping of the extrinsic parameters $\vec{\xi}$, such as the sky position and luminosity distance. The initial estimates for the full set of parameters $\vec{\theta}=\{\vec{\eta},\vec{\xi}\}$ are used along with the de-glitched data and refined PSD estimate to initialize the heterodyned likelihood function that is used by the PTMCMC algorithm to rapidly produce the full posterior distribution.

The final stage of the algorithm refines the initial parameter estimates using a PTMCMC algorithm and a fast, heterodyned likelihood function Cornish (2010); Cornish and Shuman (2020). The heterodyned likelihood offers significant speed advantages, especially for long duration signals such as binary neutron star inspirals. The run-time for the final PTMCMC stage scales linearly with the sample rate and is independent of the observation time. On the same laptop computer described earlier, the PTMCMC stage takes four minutes for black hole binaries and neutron star - black hole binaries, and eight minutes for neutron star binaries. These run-times are several orders of magnitude faster than the hours or days it takes for LALinference Veitch et al. (2015); Ashton et al. (2019) to produce results.

### II.1 Spectral Estimation and Wavelet De-noising

The QuickCBC algorithm is designed to work with short stretches of data, typically between $T_{\rm obs}=4$ and $T_{\rm obs}=32$ seconds in duration. Traditional spectral estimation techniques, such as Welch averaging, cannot be used on short data segments like these. The advantages of working with short data segments are speed and robustness against non-stationary drifts in the power spectrum.
The disadvantages are low spectral resolution and possible biases due to the presence of loud signals or glitches. To guard against such biases an iterative wavelet de-noising approach is used to remove non-Gaussian features from the data.

Figure 2: Time-frequency maps illustrating the spectral estimation and de-noising procedure applied to four seconds of LIGO Hanford data surrounding GPS time 1126259462. The initial spectral estimation and whitening (upper panel) is impacted by the loud signal from GW150914. Wavelet de-noising is used to remove the excess power, then the spectral estimation is repeated. The original data is re-whitened (middle panel) ready for the next iteration. The process is iterated until the excess SNR plateaus. The bottom panel shows the de-noised data used to produce the final spectral estimate.

The iterative spectral estimation procedure proceeds as follows: A Tukey window is applied to the data to limit spectral leakage. An FFT is then used to produce a periodogram $S_{p}(f)$. A running median of width $\Delta f$ is used to smooth the periodogram. The width of the smoothing window $\Delta f$ is chosen to strike a balance between following the slope of the spectrum (small $\Delta f$’s follow the slope), and ignoring sharp spectral lines (large $\Delta f$’s are robust against lines). More accurately, it is the number of Fourier samples in the smoothing window that is critical, so for longer observation times smaller windows can be used. For the shortest 4 second segments the default width is $\Delta f=16$ Hz. The smoothed spectral estimate $S_{s}(f)$ is used to identify spectral lines, which are found by taking the ratio $R(f)=S_{p}(f)/S_{s}(f)$, with lines defined as regions where this ratio exceeds $R_{*}=10$.
The full spectral estimate is then given by $S(f)=\left\{\begin{array}{lr}S_{s}(f),&\text{for }R(f)\leq R_{*}\\ S_{p}(f),&\text{for }R(f)>R_{*}\end{array}\right.\,.$ (1) The initial spectral estimate is used to whiten the data: $\tilde{d}(f)\rightarrow\tilde{d}_{w}(f)=\tilde{d}(f)/\sqrt{S(f)}$. The whitened data is wavelet transformed using an over-complete collection of Morlet-Gabor continuous wavelets (see the top panel of Figure 2). A wavelet de-noising procedure Torrence and Compo (1998) is then used to remove any non-Gaussian features from the data. Wavelet de-noising is basically a time-frequency thresholding technique. For stationary Gaussian noise, the wavelet power spectrum, $S_{nm}$, should follow a chi-squared distribution with two degrees of freedom. Wavelet pixels with power above a certain threshold are identified, then an inverse wavelet transform of these pixels is used to produce a whitened time domain reconstruction of the excess power. The reconstructed feature is re-colored using the smooth component of the power spectrum $S_{s}(f)$ and subtracted from the original time domain data. The thresholding procedure starts by identifying pixels with $S_{nm}>S_{0}$, then surrounding pixels with $S_{n\pm 1\,m\pm 1}>S_{1}$ are also flagged, with the goal of identifying clusters of excess power. The standard threshold values are $S_{0}=10$ and $S_{1}=6$. The power spectral estimation is repeated using the de-noised data. The updated power spectrum is then used to whiten the original data, and the wavelet de-noising procedure is repeated. The entire procedure is iterated until the signal-to-noise of the non-Gaussian excess plateaus. This typically takes just one or two iterations, but data with very loud glitches may require as many as five or six iterations for the procedure to converge. Figure 2 illustrates the spectral estimation and de-noising procedure using 4 seconds of data surrounding GW150914.
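The running-median smoothing and line identification steps can be sketched in a few lines. This is a simplified stand-in for the QuickCBC implementation; the function and variable names are illustrative, not taken from the pipeline's source.

```python
import numpy as np

def spectral_estimate(periodogram, df, window_hz=16.0, r_star=10.0):
    """Sketch of the PSD estimate described in the text: smooth the
    periodogram S_p(f) with a running median of width Delta f, then keep
    the raw periodogram wherever R(f) = S_p/S_s exceeds R_* (spectral lines).
    """
    n = int(window_hz / df)            # number of samples in the window
    half = n // 2
    s_s = np.empty_like(periodogram)
    for i in range(len(periodogram)):
        lo, hi = max(0, i - half), min(len(periodogram), i + half + 1)
        s_s[i] = np.median(periodogram[lo:hi])   # running median S_s(f)
    lines = periodogram / s_s > r_star           # line bins, R(f) > R_*
    return np.where(lines, periodogram, s_s), lines
```

A loud line sticking up from a unit-level noise floor is flagged and retained at its raw periodogram value, while line-free regions follow the smooth running median.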
The initial spectral estimate used to whiten the data (top panel) overestimates the height of the spectrum through the band between $\sim 40{\rm\,Hz}\rightarrow 120{\rm\,Hz}$ due to the loud gravitational wave signal. The data in the middle panel is whitened using the updated spectrum found after the initial round of wavelet de-noising. More features are now visible in the mid-band frequencies. In this example the procedure converged after two iterations. The lower panel shows the de-noised data that was used to produce the final spectral estimate. Figure 3 shows the whitened time domain reconstruction of the feature that was removed by the de-noising process. In this instance the feature is the gravitational wave signal from a binary black hole merger. It would be a bad idea to use the wavelet de-noised data from the spectral estimation procedure for subsequent stages in the analysis! Instead, just the spectral estimate is used.

Figure 3: The non-Gaussian feature removed by wavelet de-noising during the iterative spectral estimation procedure applied to four seconds of LIGO Hanford data centered on GPS time 1126259462. In this instance the non-Gaussian feature is the gravitational wave signal from the binary black hole merger GW150914.

The final stage of the QuickCBC spectral estimation procedure is to fit a fixed dimension version of the BayesWave trans-dimensional spectral model Cornish and Littenberg (2015); Littenberg and Cornish (2015) to the glitch-subtracted data. The spectral model includes a smooth component described by a cubic spline, and line features described by a Lorentzian line model. The running median is used to initialize the spline model. The spacing of the spline points is determined by comparing two running averages of the running median, one with a window twice as wide as the other. The usual choice is a 4 Hz and an 8 Hz window.
The spline control points are spaced more closely in regions where the two averages diverge, and further apart in regions where the two averages converge. The minimum spacing of the spline control points is set at 4 Hz and the maximum spacing is set at 32 Hz. The threshold on the ratio of the two averages is set at 20%. The Lorentzian line model is initialized using the outliers from the running median to set the initial location, amplitude and width of the lines. A simple fixed-dimension MCMC is then used to refine the model parameters. To control unphysical oscillations in the spline model, a prior is used that penalizes points with large second derivatives. Figure 4 compares the QuickCBC spectral estimate to the estimate from the BayesWave algorithm Cornish and Littenberg (2015); Littenberg and Cornish (2015). The QuickCBC estimate agrees very well with the BayesWave estimate. The key difference is that the QuickCBC estimate is produced in seconds, while the more refined BayesWave estimate takes tens of minutes or longer to produce.

Figure 4: A comparison of power spectral density estimates using four seconds of LIGO Hanford data surrounding GW150914, centered on GPS time 1126259462. The low-latency QuickCBC algorithm produces a good approximation to the reference BayesWave estimate. A Welch average using 2048 seconds of data surrounding the event is shown for comparison.

### II.2 Parallel Tempered Markov Chain Monte Carlo

The QuickCBC algorithm uses a Parallel Tempered Markov Chain Monte Carlo (PTMCMC) algorithm Swendsen and Wang (1986); Littenberg and Cornish (2009); Vousden et al. (2015) for the initial search, the fast sky mapping and the full parameter inference. The implementation varies slightly between the different stages, mostly in terms of the likelihood function used, but the central engine is the same.
The PTMCMC algorithm uses a collection of chains that explore likelihoods that are scaled by “inverse temperatures” $\beta_{i}$: $\ln L(\beta_{i},\vec{\theta})=\beta_{i}\ln L(\vec{\theta})$. Chains with $\beta_{i}=1$ explore the target posterior, while chains with $\beta_{i}<1$ range more widely, and help chains to escape from local maxima. Chains with $\beta_{i}>1$ can be used to help lock on to weak signals. The temperature ladder is set so that there are multiple “cold” chains, $\beta_{i}=1$ for $i=[1,N_{\rm c}]$, that interact with $N_{\rm h}$ “hot” chains on a geometrically spaced temperature ladder: $\beta_{k}=\alpha^{k}$ for $k=[1,N_{\rm h}]$. The choice of increment $\alpha$ and the number of hot chains are tunable parameters. Setting $\alpha$ too small will result in poor exchange between chains. Setting $\alpha$ too close to one will require a prohibitive number of chains to reach the desired maximum temperature. A good rule of thumb for the maximum temperature is that the effective signal-to-noise ratio for the hottest chain, ${\rm SNR}_{N_{\rm h}}={\rm SNR}\sqrt{\beta_{N_{\rm h}}}$, should be of order 4-5. For typical LIGO-Virgo signals with ${\rm SNR}\sim 10\rightarrow 20$ we need $\beta_{N_{\rm h}}\sim 1/4\rightarrow 1/16$. Using a spacing of $\alpha=0.8$ requires of order $N_{\rm h}\sim 6\rightarrow 12$ hot chains to reach the desired maximum temperature. These rule-of-thumb settings for the PTMCMC temperature ladder have been found to work in practice, but the efficiency could be improved by using dynamic temperature spacing Vousden et al. (2015). When loud signals (${\rm SNR}>30$) are detected, it may be necessary to start a new analysis with additional chains to reach a high enough maximum temperature. Each chain is updated using a mixture of proposal distributions.
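The rule-of-thumb ladder construction can be sketched as follows. This is a hypothetical helper, not code from the pipeline; the function name and defaults are illustrative.

```python
import math

def temperature_ladder(snr, n_cold=4, alpha=0.8, snr_hot=5.0):
    """Build a geometric inverse-temperature ladder following the rule of
    thumb in the text: cold chains at beta = 1, hot chains beta_k = alpha**k,
    extended until the effective SNR of the hottest chain,
    SNR * sqrt(beta), drops to roughly snr_hot."""
    betas = [1.0] * n_cold          # the interacting cold chains
    beta = 1.0
    while snr * math.sqrt(beta) > snr_hot:
        beta *= alpha               # one more rung on the geometric ladder
        betas.append(beta)
    return betas
```

For an SNR of 20 this yields roughly a dozen hot chains with $\beta_{N_{\rm h}}$ near 1/16, consistent with the estimate in the text.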
The standard proposal mix includes draws from the prior distribution, jumps along eigenvectors of the Fisher information matrix, differential evolution, Gaussian jumps along each intrinsic parameter direction, and a dedicated extrinsic parameter proposal that draws new sky locations that maintain the time delay between a randomly selected pair of detectors, while analytically adjusting the other extrinsic parameters to keep the detector frame waveforms unchanged. The extrinsic proposal is described in detail in section IVa of Ref. Cornish et al. (2020). The Fisher information matrix proposal is based on a quadratic expansion of the log likelihood: $\displaystyle\Gamma_{ij}(\vec{\theta})=-\partial_{i}\partial_{j}\ln L$ $\displaystyle\quad=4\sum_{ab}\int\frac{{\cal A}_{ab,i}{\cal A}_{ab,j}+{\cal A}_{ab}^{2}\Phi_{ab,i}\Phi_{ab,j}}{S^{a}(f)}\,df\,.$ (2) Here the derivatives are taken with respect to the waveform parameters $\theta^{i}$ centered on some reference value $\vec{\theta}$. The sum is over the detectors, $a$, in the network, and the harmonics, $b$, of the gravitational wave signal: $h_{a}(f)=\sum_{b}{\cal A}_{ab}(f)e^{i\Phi_{ab}(f)}\,.$ (3) The reference value $\vec{\theta}$ is updated to the current value of the chain every few hundred iterations and the Fisher matrix is recomputed at the new location. The Fisher matrix proposal employs the eigenvectors ${\bf v}_{(k)}$ and eigenvalues $\lambda_{k}$, found by solving the linear system $\Gamma_{ij}v^{j}_{(k)}=\lambda_{k}v^{i}_{(k)}\,.$ (4) Jumps from the current location ${\bf x}$ to a candidate location ${\bf y}$ are proposed by first randomly selecting an eigen-direction $p$, and setting ${\bf y}={\bf x}+\frac{\gamma}{\sqrt{\lambda_{p}}}{\bf v}_{(p)}\,,$ (5) where $\gamma\sim{\cal N}(0,1)$ is a zero mean, unit variance Gaussian deviate. The proposal densities for this jump cancel in the Metropolis-Hastings ratio since the Fisher matrix is held fixed (aside from infrequent updates).
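A minimal sketch of the eigen-direction jump of equations (4)-(5) follows. The code is illustrative rather than taken from the pipeline, and the eigenvalue floor is an added safeguard against poorly conditioned matrices, not a detail from the paper.

```python
import numpy as np

def fisher_eigen_jump(x, gamma, rng):
    """Propose y = x + N(0,1)/sqrt(lambda_p) * v_p along a randomly chosen
    eigen-direction of the symmetric Fisher matrix gamma."""
    lam, vecs = np.linalg.eigh(gamma)       # eigen-decomposition of Gamma
    p = rng.integers(len(lam))              # random eigen-direction
    lam_p = max(lam[p], 1e-6)               # floor ill-conditioned modes
    return x + rng.standard_normal() / np.sqrt(lam_p) * vecs[:, p]
```

For a diagonal Fisher matrix the eigenvectors are axis-aligned, so each proposal perturbs exactly one parameter, with a step size set by the curvature in that direction.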
Jumping along eigen-directions is more robust than drawing from the full Fisher matrix, as the matrices are often poorly conditioned. The poor conditioning typically only impacts one or two eigen-directions, and still allows for good acceptance of jumps along the other eigen-directions. Small Gaussian jumps along individual parameter directions are included in the proposal mix to help cover directions that might not be explored well by the Fisher matrix jumps. Differential evolution (DE) proposals Ter Braak (2006) are particularly good at exploring degenerate directions in parameter space, the very same directions that cause the Fisher matrix to become ill-conditioned. The variant of differential evolution used by QuickCBC works as follows: A history array of past samples, ${\bf z}$, is collected for each temperature level (with multiple copies for the cold chains). The array is initialized with draws from the prior. Samples are added to the history array after every $\sim 10$ iterations. A counter $j$ keeps track of how many samples have been added. Each new sample is added to the array at index $j\ ({\rm mod}\ N_{H})$; that is, when $j$ reaches $N_{H}$ the first entry in the array gets replaced, and so on. The DE proposal is made as follows: Two samples, $k,l$, are drawn from the history array and used to propose a new location ${\bf y}={\bf x}+\gamma({\bf z}_{k}-{\bf z}_{l})\,.$ (6) Here $\gamma$ is drawn from a Gaussian of width $2.38/\sqrt{2d}$ for 90% of the DE updates, where $d$ is the number of parameters, and set to $\gamma=1$ for the rest. The proposal is symmetric, so the proposal densities cancel in the Metropolis-Hastings ratio. The Gaussian DE jumps are good for exploring local correlations, while the $\gamma=1$ DE jumps allow the chains to move between discrete modes of the posterior. The QuickCBC sampler is currently limited to using waveform templates that describe non-precessing, quasi-circular binaries.
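The DE update described above can be sketched as follows, with numpy standing in for the pipeline's implementation; the names are illustrative.

```python
import numpy as np

def de_proposal(x, history, rng):
    """Differential-evolution proposal from the text: draw two past samples
    z_k, z_l and propose y = x + gamma*(z_k - z_l), with gamma drawn from
    N(0, 2.38/sqrt(2d)) for 90% of updates and gamma = 1 otherwise.
    The proposal is symmetric, so no Hastings correction is needed."""
    d = len(x)
    k, l = rng.integers(len(history), size=2)
    if rng.random() < 0.9:
        gamma = rng.normal(0.0, 2.38 / np.sqrt(2 * d))  # local correlations
    else:
        gamma = 1.0          # full-step jumps move between posterior modes
    return x + gamma * (history[k] - history[l])
```

If the history has collapsed to a single point the difference vector vanishes and the proposal degenerates to the current location, which is why the history array is seeded with prior draws.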
The non-precessing, quasi-circular templates can be parameterized in terms of four intrinsic parameters and seven extrinsic parameters. The intrinsic parameters are the individual masses $m_{1},m_{2}$ and the aligned dimensionless spins $\chi_{1},\chi_{2}$. Here $\chi=\vec{\chi}\cdot\hat{L}$, where $\vec{\chi}=\vec{S}/m^{2}$ is the dimensionless spin vector and $\hat{L}$ is a unit vector aligned with the orbital angular momentum. The seven extrinsic parameters are the sky location RA, DEC = $(\alpha,\beta)$, luminosity distance $D_{L}$, polarization and inclination $(\psi,\iota)$, and merger time and merger phase $(t_{c},\phi_{c})$. The QuickCBC sampler uses the modified collection of parameters $\vec{\eta}\rightarrow\{\ln{\cal M},\ln M,\chi_{1},\chi_{2}\}$ and $\vec{\xi}\rightarrow\{\alpha,\sin\beta,\ln D_{L},\psi,\cos\iota,\phi_{c},t_{c}\}$, where $M=m_{1}+m_{2}$ is the total mass and ${\cal M}=(m_{1}m_{2})^{3/5}/M^{1/5}$ is the chirp mass. The priors are taken to be uniform in all the parameters save for ${\cal M},M,D_{L}$. For the masses the priors are uniform in $m_{1},m_{2}$, which can be enforced using the Jacobian factor $J_{M}=Mm_{1}m_{2}/\sqrt{M^{2}-4m_{1}m_{2}}$. For the distance, the prior is taken to be uniform in luminosity distance volume, which can be enforced using the Jacobian factor $J_{D}=D_{L}^{3}$. Some waveform models, such as the IMRPhenomD model Husa et al. (2016); Khan et al. (2016) used to produce the plots in this paper, are only considered to be reliable for a subset of mass ratios and spins. To account for this, the prior ranges for the IMRPhenomD analyses are restricted such that $m_{1}/m_{2}<18$ and $|\chi|<\chi_{\rm max}=0.85$. The default prior on the spins is uniform in the aligned spin component. To facilitate comparison with the IMRPhenomPv2 precessing model Hannam et al.
(2014), which uses a uniform-in-direction spin prior, a second spin prior option can be selected that is uniform in the aligned spin component for isotropically distributed spins, namely, $p(\chi)=\ln(\chi_{\rm max}/|\chi|)/(2\chi_{\rm max})$.

### II.3 Glitch Robust Coherent Search

The QuickCBC algorithm can be used to search for CBC signals in segments of LIGO/Virgo data. The standard usage is to follow up triggers from template-bank based CBC search pipelines, but any valid GPS time will do. QuickCBC executes a stochastic search using a PTMCMC algorithm and a glitch-robust maximized likelihood function. The search is limited to the dominant waveform harmonic for non-precessing, quasi-circular binaries. As such, the search may fail to detect systems with significant contributions from higher modes, strongly precessing systems, or highly eccentric systems. Extending the search to include higher modes is straightforward. Including precession and eccentricity is far more challenging. The dominant waveform harmonic for non-precessing, quasi-circular binaries has its polarization states related by $h_{\times}(f)={\rm i}\epsilon h_{+}(f)$, where $\epsilon=-\frac{2\cos\iota}{(1+\cos^{2}\iota)}\,.$ (7) The detector response can be written as $h_{a}(\vec{\theta},f)=h_{+}(\vec{\eta},f)\frac{D_{*}}{D_{L}}\left(F^{a}_{+}+{\rm i}\epsilon F^{a}_{\times}\right){\rm e}^{2\pi{\rm i}f\Delta t_{a}}{\rm e}^{{\rm i}\phi_{c}}$ (8) where $a$ labels the detector, $F^{a}_{+}(\alpha,\beta,\psi)$ and $F^{a}_{\times}(\alpha,\beta,\psi)$ are the antenna response patterns, $\Delta t_{a}$ is the arrival time relative to the geocenter time, $\phi_{c}$ is the merger phase, and $D_{L}$ is the luminosity distance. The reference geocenter waveform $h_{+}(\vec{\eta},f)$ is generated using an arbitrary fiducial luminosity distance $D_{*}$, with merger time and phase set equal to zero. As such, the reference waveform only depends on the four intrinsic parameters $\vec{\eta}$.
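The mass reparameterization and the alternate spin prior quoted above are easy to check numerically; a sketch, with illustrative function names:

```python
import numpy as np

def component_masses(mchirp, mtotal):
    """Invert (chirp mass, total mass) back to component masses using
    eta = m1*m2/M^2 = (Mc/M)^(5/3)."""
    eta = (mchirp / mtotal) ** (5.0 / 3.0)   # symmetric mass ratio
    disc = np.sqrt(max(1.0 - 4.0 * eta, 0.0))
    return 0.5 * mtotal * (1.0 + disc), 0.5 * mtotal * (1.0 - disc)

def aligned_spin_prior(chi, chi_max=0.85):
    """p(chi) = ln(chi_max/|chi|)/(2 chi_max): the aligned-spin prior for
    isotropically distributed spins of uniform magnitude."""
    return np.log(chi_max / np.abs(chi)) / (2.0 * chi_max)
```

Round-tripping $m_{1}=30$, $m_{2}=20$ through the chirp mass and total mass recovers the inputs, and a midpoint-rule integral of $p(\chi)$ over $[-\chi_{\rm max},\chi_{\rm max}]$ confirms the density is normalized despite the integrable logarithmic divergence at $\chi=0$.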
Defining $F_{a}=\frac{D_{*}}{D_{L}}\left({F^{a}_{+}}^{2}+\epsilon^{2}{F^{a}_{\times}}^{2}\right)^{1/2}$ (9) and $\lambda_{a}={\rm atan}(\epsilon F^{a}_{\times}/F^{a}_{+})+\phi_{c}$ (10) the response can be written as $h_{a}(\vec{\theta},f)=h_{+}(\vec{\eta},f)F_{a}{\rm e}^{{\rm i}\lambda_{a}}{\rm e}^{2\pi{\rm i}f\Delta t_{a}}\,.$ (11) We see that the waveforms in each detector are identical up to an overall amplitude scaling, time shift and phase shift. Denoting the data in detector $a$ as $d_{a}(f)$, the Gaussian log likelihood is given by $\ln L_{a}=(d_{a}|h_{a})-\frac{1}{2}(h_{a}|h_{a})-\frac{1}{2}(d_{a}|d_{a})\,.$ (12) Here $(x|y)$ denotes the noise weighted inner product $(x|y)=2\int\frac{x^{*}y+y^{*}x}{S(f)}\,df\,,$ (13) where $S(f)$ is the power spectral density of the noise. The network log likelihood is found by summing the individual contributions: $\ln L=\sum_{a}\ln L_{a}$. #### II.3.1 Maximized likelihood During the search stage, the power spectral density is held fixed and the $(d_{a}|d_{a})$ term is a constant that can be ignored. Standard tricks are used to maximize over the amplitude, phase and arrival time of the waveforms. 
Writing the signal in terms of unit normalized sine and cosine quadratures, $(h_{a,s}|h_{a,s})=(h_{a,c}|h_{a,c})=1$, $(h_{a,s}|h_{a,c})=0$: $h_{a}(f)=A_{a}\left(h_{a,s}(f)\sin\phi_{a}+h_{a,c}(f)\cos\phi_{a}\right)\,,$ (14) the log likelihood (dropping the constant $(d_{a}|d_{a})$ term) becomes $\ln L_{a}=A_{a}\rho_{a}(t_{a},\vec{\eta})\cos(\phi_{a}-\varphi_{a}(t_{a},\vec{\eta}))-\frac{1}{2}A_{a}^{2}\,,$ (15) where $\vec{\eta}$ are the intrinsic parameters of the source, $\rho_{a}(t_{a},\vec{\eta})=|z_{a}(t_{a},\vec{\eta})|$ and $\varphi_{a}(t_{a},\vec{\eta})={\rm arg}\{z_{a}(t_{a},\vec{\eta})\}$, with $z_{a}(t_{a},\vec{\eta})=4\int\frac{d_{a}(f)h_{a,c}^{*}(f,\vec{\eta})}{S_{n}(f)}e^{2\pi{\rm i}ft_{a}}\,df\,.$ (16) The likelihood is maximized with respect to amplitude and phase by setting $A_{a}=\rho_{a}$ and $\phi_{a}=\varphi_{a}$: $\ln L_{a,{\rm max}}(t_{a},\vec{\eta})=\frac{1}{2}\rho^{2}_{a}(t_{a},\vec{\eta})\,.$ (17) The complex SNR time series $z_{a}(t_{a},\vec{\eta})$ can be computed using an inverse fast Fourier transform. The likelihood can then be maximized with respect to the time offset $t_{a}$ by sorting the resulting time series $\rho_{a}(t_{a},\vec{\eta})$. The network likelihood $\ln L_{\rm max}(\{t_{i}\},\vec{\eta})=\frac{1}{2}\sum_{a}\rho^{2}_{a}(t_{a},\vec{\eta})$ (18) can be maximized with respect to the arrival times in each detector, $\{t_{i}\}$, subject to the constraint that the time differences $\Delta t_{ij}=|t_{i}-t_{j}|$ are less than the light travel times between the detector sites. The maximization is done pair-wise between detectors, starting with a reference detector. For networks with three or more detectors the pair-wise approach can yield collections of time delays that do not correspond to any physical sky location. Similarly, the relative phases may not correspond to any physical sky location, inclination and polarization angle.
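The time maximization of equation (16) can be illustrated with a toy matched filter. This is illustrative code, not the pipeline's implementation: constant normalization factors are dropped, and the "data" is simply a time-shifted copy of the template.

```python
import numpy as np

def snr_timeseries(d_f, h_f, s_f):
    """Compute rho(t) = |z(t)| via an inverse FFT of d(f) h_c*(f)/S_n(f),
    dropping constant normalization factors."""
    return np.abs(np.fft.ifft(d_f * np.conj(h_f) / s_f))

# Toy demonstration: the data is the template delayed by 25 samples,
# so the SNR time series should peak at t = 25.
n = 256
h_f = np.fft.fft(np.exp(-np.arange(n) / 16.0) * np.sin(np.arange(n)))
d_f = h_f * np.exp(-2j * np.pi * np.fft.fftfreq(n) * 25)
rho = snr_timeseries(d_f, h_f, np.ones(n))
```

Because the inverse FFT returns $|z(t)|$ at every time offset at once, maximizing over $t_{a}$ costs a single transform rather than a separate overlap integral per trial time.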
In most cases such unphysical solutions are not a problem, as the extrinsic parameters get refined in the subsequent sky-mapping stage of the analysis. When glitches are present in the data, the log likelihood time series in each detector, $\ln L_{a,{\rm max}}(t_{a},\vec{\eta})$, may have multiple distinct maxima. Some of these maxima will be associated with glitches and some will be associated with the signal. To account for this possibility, all maxima that are at least 50 ms apart are recorded for each detector before applying the network time-delay restriction. The algorithm can return multiple solutions, each with different arrival times, amplitudes and phases. A glitch rejection step is then applied to each candidate solution before arriving at a unique maximum likelihood solution.

#### II.3.2 Banded Glitch Rejection

CBC searches use variants of the $\rho$ search statistic, defined for data $d$ and templates $h$ as $\rho=\frac{(d|h)}{(h|h)^{1/2}}\,.$ (19) When a glitch is present in the data, $d=n+g$, the template can ring off against the glitch. Usually this occurs across a narrow band of frequencies. Looking at how $\rho$ accumulates with frequency can be used to detect glitches. Rather than steadily accumulating, $\rho$ gets a big boost in the frequency band where the signal crosses a glitch. Motivated by these considerations, a chi-squared test for glitch rejection has been incorporated into CBC searches Allen (2005). The statistic uses frequency bands of varying width, with the width chosen so that the template has equal SNR $=\sqrt{(h|h)}$ in each band.

Figure 5: The banded $\bar{\rho}$ statistic as a function of central frequency for a template with chirp mass ${\cal M}=1.197M_{\odot}$, total mass $M=2.8M_{\odot}$ and merger time $t_{c}=1187008882.4486$ GPS seconds. The $\bar{\rho}$ statistic successfully identifies the frequency bands where the signal track crosses a loud noise transient in the LIGO Livingston detector.
Here we introduce a variant of this approach with fixed-width frequency bands, defining the ${\bar{\rho}}$ statistic ${\bar{\rho}}(f,\Delta f)=\frac{(d-h|h)_{{\rm max}\,\phi_{0}}}{(h|h)^{1/2}}\,,$ (20) where the noise-weighted inner products are computed across a frequency band of width $\Delta f$, centered at frequency $f$. The inner product of the residual, $d-h$, and the template, $h$, is analytically maximized with respect to the overall phase in that band using sine/cosine quadratures. In pure Gaussian noise, $d=n$, we have ${\rm E}[{\bar{\rho}}]=1-{\rm SNR},\quad{\rm Var}[{\bar{\rho}}]=1.$ (21) When the template matches a signal in the data, $d=n+h$, we have ${\rm E}[{\bar{\rho}}]=1,\quad{\rm Var}[{\bar{\rho}}]=1.$ (22) When a glitch is present in the data, $d=n+g$, the template rings off against the glitch and ${\bar{\rho}}(f,\Delta f)$ becomes large and positive. Frequency bands where ${\bar{\rho}}(f,\Delta f)>4$ are excluded from the likelihood calculation. The amplitude and phase maximization are repeated for the full template with any glitch-impacted bands removed. The banded glitch rejection is applied to the collection of candidate solutions from the original likelihood maximization step. The solution with the largest banded likelihood is returned and used by the PTMCMC search algorithm.

### II.4 Glitch Removal

The PTMCMC search using the banded maximum likelihood function returns an initial estimate for the extrinsic parameters of the signal, along with the arrival times, amplitudes and phases in each detector. This solution is then used to subtract the CBC signal from the data. The residual is then processed through the same spectral estimation procedure that was applied to the original data. Figure 6 shows the reconstructed glitch model for the residual in the LIGO Livingston detector roughly a second before the merger of the binary neutron star GW170817.

Figure 6: The whitened glitch model in LIGO Livingston data centered at GPS time 1187008882.
The glitch reconstruction was performed after a low-latency point estimate for the GW170817 signal, which is coincident with the glitch, was subtracted from the data. Figure 7 shows time-frequency maps of the LIGO Livingston data surrounding the GW170817 event. The upper panel shows the time-frequency track for the point estimate of the signal that was subtracted from the data prior to the second round of spectral estimation and wavelet de-noising. The lower panel shows the whitened data after the noise transient has been removed. All subsequent stages of the analysis are performed using the glitch-subtracted data.

Figure 7: Time-frequency maps of the LIGO Livingston data centered at GPS time 1187008882. The upper panel shows the raw whitened data. The black line indicates the reconstructed time-frequency track for GW170817 found using the banded maximum likelihood. The best-fit signal is subtracted from the data, then wavelet de-noising is used to identify any noise transients in the data. The noise transients are removed from the original data in preparation for more refined parameter estimation (lower panel).

### II.5 Low Latency Sky Mapping

The search phase delivers an estimate for the intrinsic parameters, in addition to the amplitudes, phases and arrival times in each detector. The next step is to find extrinsic parameters that are consistent with the waveforms seen in each detector. With three or more detectors the problem of solving for the extrinsic parameters $\vec{\xi}$ given the relative amplitudes, arrival times and phases is over-constrained, and often ill-posed due to noise. Rather than trying to solve the problem analytically, we once again resort to a Monte Carlo approach, this time aided by an extremely cheap-to-compute likelihood function.
When the intrinsic parameters $\vec{\eta}$ are held fixed, the response in each detector can be found by applying projections to a reference geocenter waveform that amount to amplitude re-scalings, phase shifts and time shifts (see equation 11). Taking a reference waveform $\hat{h}_{+}$, scaled to unity at some reference distance $D_{a}$, the log likelihood can be written as $\ln L=\sum_{a}\frac{F_{a}D_{a}}{D_{L}}\left(e^{-{\rm i}\lambda_{a}}C_{a}(t_{a})+e^{{\rm i}\lambda_{a}}C_{a}^{*}(t_{a})\right)-\frac{F_{a}^{2}D_{a}^{2}}{2D^{2}_{L}}\,,$ (23) where $C_{a}(t_{a})=\int\frac{d_{a}\hat{h}_{+}^{*}}{S_{n}(f)}\,{\rm e}^{2\pi{\rm i}ft_{a}}df\,,$ (24) can be evaluated using a fast Fourier transform (FFT). In order to have sufficient time resolution (typically a tenth of a millisecond or less), it is necessary to zero-pad the frequency series prior to performing the FFT. Nonetheless, the computational cost is small. Putting everything together we have $\ln L(\vec{\xi},\vec{\theta})=\sum_{a}2F_{a}|C_{a}(t_{a})|\cos(\lambda_{a}-{\rm arg}\{C_{a}(t_{a})\})-\frac{F_{a}^{2}D_{a}^{2}}{2D^{2}_{L}}\,.$ (25) The quantities $D_{a}$ and $C_{a}(t_{a})$ can be pre-computed and stored for any choice of intrinsic parameters $\vec{\eta}$. The likelihood for any set of extrinsic parameters can then be found at the cost of a few multiplications and a cosine, allowing for millions of likelihood evaluations per second. Similar techniques can be used to accelerate the calculation of the extrinsic Fisher matrix, $\Gamma^{E}_{ij}=(\partial_{\xi^{i}}h|\partial_{\xi^{j}}h)$: $\displaystyle\Gamma^{E}_{ij}=\sum_{a}\left[(F_{a,i}F_{a,j}+F_{a}^{2}\lambda_{a,i}\lambda_{a,j})H_{0a}\right.$ $\displaystyle\quad+2\pi F_{a}^{2}(\lambda_{a,i}t_{a,j}+\lambda_{a,j}t_{a,i})H_{1a}$ $\displaystyle\quad\left.+4\pi^{2}F_{a}^{2}t_{a,i}t_{a,j}H_{2a}\right]$ (26) where $H_{ka}=(f^{k}h_{+}|h_{+})_{a}$. The inner products $H_{ka}$ are computed once and stored.
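Once the overlaps $C_{a}(t_{a})$ are stored, an evaluation of equation (25) really is just a few floating point operations per detector; a sketch, with illustrative names and a scalar loop for clarity:

```python
import numpy as np

def extrinsic_log_like(f_amp, lam, c_vals, d_ref, d_lum):
    """Evaluate equation (25): f_amp holds F_a, lam holds lambda_a,
    c_vals holds the pre-computed complex overlaps C_a(t_a) interpolated
    at the trial arrival times, d_ref holds D_a, and d_lum is D_L."""
    lnl = 0.0
    for fa, la, ca, da in zip(f_amp, lam, c_vals, d_ref):
        lnl += 2.0 * fa * abs(ca) * np.cos(la - np.angle(ca))
        lnl -= fa ** 2 * da ** 2 / (2.0 * d_lum ** 2)
    return lnl
```

Each trial set of extrinsic parameters changes only $F_{a}$, $\lambda_{a}$ and the arrival times at which $C_{a}$ is read off, which is what makes millions of evaluations per second feasible.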
The luminosity distance can be extracted from the reference waveform by re-scaling the response function such that $F_{a}\rightarrow(D_{a}/D_{L})F_{a}$, with $D_{a}$ scaled such that $H_{0a}=1$. The derivatives of the arrival time at each detector, $t_{a,i}$, are non-vanishing for $\{\alpha,\beta,t_{c}\}$. The phase derivatives are non-vanishing for $\{\alpha,\beta,\psi,\iota,\phi_{c}\}$: $\lambda_{a,i}=\delta_{i\phi_{c}}+\frac{(F^{a}_{+}(\epsilon F^{a}_{\times})_{,i}-F^{a}_{+,i}(\epsilon F^{a}_{\times}))}{F_{a}^{2}}\,,$ (27) while the derivatives of the re-scaled antenna pattern are non-vanishing for $\{\alpha,\beta,\psi,\iota,D_{L}\}$: $F_{a,i}=-\frac{F_{a}}{D_{L}}\delta_{iD_{L}}+\frac{(F^{a}_{+}F^{a}_{+,i}+(\epsilon F^{a}_{\times})(\epsilon F^{a}_{\times})_{,i})}{F_{a}}\,.$ (28)

Figure 8: Low latency sky map for GW170817. The blue star indicates the location of the electromagnetic counterpart to the BNS merger.

A PTMCMC algorithm is used to explore the extrinsic parameters. The initial “burn-in” phase can be accelerated by randomly trying out sky locations until one is found that yields the correct time delays between the detectors to within some pre-defined tolerance. However, the likelihood evaluation is so fast that such acceleration is not necessary, and the chains can simply be initialized at some random draw from the prior distribution. During the burn-in phase the extrinsic PTMCMC uses the same number of chains and the same temperature ladder as the intrinsic PTMCMC from the coherent search. Each chain inherits the intrinsic parameters from the search phase. A mixture of proposal distributions is employed: jumps along eigenvectors of the extrinsic Fisher matrix $\Gamma^{E}_{ij}$; small Gaussian jumps along each extrinsic parameter direction; and deterministic jumps along sky rings that preserve the time delay between a randomly selected pair of detectors Cornish et al. (2020).
Samples from the chains with unit inverse temperature are used to produce low latency sky maps such as the example shown in Figure 8.

### II.6 CBC Parameter Estimation

The rapid coherent search and low latency sky mapping yield a good starting solution for a Bayesian exploration of the source parameters. The inference is performed using the PTMCMC sampler with Fisher matrix proposals, differential evolution, deterministic sky ring jumps in the extrinsic parameters, and small Gaussian jumps along each parameter direction. The analysis is sped up by using a heterodyned likelihood function Cornish (2010); Cornish and Shuman (2020). The heterodyned likelihood uses a reference waveform $\bar{h}$, in this case the maximum likelihood solution from the search, to re-write the log likelihood as $\ln L_{a}=(\bar{r}_{a}|\bar{h}_{a})+\frac{1}{2}(\bar{h}_{a}|\bar{h}_{a})-(\bar{r}_{a}|\Delta h_{a})-\frac{1}{2}(\Delta h_{a}|\Delta h_{a})\,,$ (29) where $\bar{r}_{a}=d_{a}-\bar{h}_{a}$ and $\Delta h_{a}=\bar{h}_{a}-h_{a}$. The $(\bar{r}_{a}|\bar{h}_{a})$ and $(\bar{h}_{a}|\bar{h}_{a})$ terms in the likelihood can be computed once and stored. The $(\Delta h_{a}|\Delta h_{a})$ term can be written as $4\int\frac{\Delta h\Delta h^{*}}{S(f)}df=4\int\frac{\bar{{\cal A}}^{2}+{\cal A}^{2}-2\bar{{\cal A}}{\cal A}\cos\Delta\Phi}{S(f)}df\,,$ (30) where we have used $h={\cal A}(f)e^{i\Phi(f)}$, $\bar{h}=\bar{\cal A}(f)e^{i\bar{\Phi}(f)}$ and $\Delta\Phi(f)=\bar{\Phi}(f)-\Phi(f)$. The phase difference between the reference waveform $\bar{h}$ and waveforms drawn from the posterior distribution $h$ will always be small, so using a reference waveform effectively heterodynes the numerator of equation (30), rendering it a slowly varying function of frequency.
The $(\bar{r}_{a}|\Delta h_{a})$ term in the likelihood can be written as $(\bar{r}_{a}|\Delta h_{a})=4\int(\Re\bar{r}_{w}\Re\Delta h_{w}+\Im\bar{r}_{w}\Im\Delta h_{w})df\,,$ (31) where $\bar{r}_{w}=\frac{\bar{r}\,e^{-i\bar{\Phi}(f)}S^{1/2}_{s}(f)}{S(f)}$ (32) is the whitened reference residual heterodyned by the reference phase and $\Delta h_{w}=\frac{\left(\bar{\cal A}(f)-{\cal A}(f)e^{-i\Delta\Phi(f)}\right)}{S^{1/2}_{s}(f)}$ (33) is the heterodyned difference in the waveforms, whitened by the smooth component of the amplitude spectral density. The integrands in (30) and (31) can be written as products of a slowly varying function $s(f)$ and a rapidly varying function $r(f)$. In equation (30) the numerator is a slowly varying function, while the inverse of the full power spectral density is a rapidly varying function due to the spectral lines. In equation (31) the real and imaginary parts of $\Delta h_{w}$ are slowly varying, while the real and imaginary parts of the heterodyned residual $\bar{r}_{w}$ are rapidly varying. The integrals (in practice sums over frequency) can be evaluated accurately and rapidly using a Legendre polynomial expansion. The sum over frequency is broken up into bands of width $\Delta f$, and the discrete Legendre polynomial expansions of the rapidly varying function $r(f)$ are computed once and stored for each frequency band. Each frequency band covers $M=T_{\rm obs}\Delta f$ frequencies, and the number of bands is $K=2f_{\rm ring}/\Delta f$, where $f_{\rm ring}$ is the ringdown frequency of the reference waveform.
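Keeping only the lowest two Legendre coefficients amounts to treating the slowly varying function as linear across each band, so each band's contribution reduces to two pre-computed moments of the rapidly varying function. A sketch with non-overlapping bands (a slight simplification of the shared-endpoint bands used in the text; the names are illustrative):

```python
import numpy as np

def banded_sum(s, r, m):
    """Approximate sum(s*r) band by band: the rapid series r enters only
    through two stored moments per band, and the slow series s enters only
    through its values at the band edges (a linear approximation)."""
    total = 0.0
    for lo in range(0, len(s), m):
        k = np.arange(lo, min(lo + m, len(s)))
        x = (k - lo) / max(len(k) - 1, 1)        # 0..1 across the band
        r0, r1 = np.sum(r[k]), np.sum(x * r[k])  # the stored moments
        total += s[k[0]] * (r0 - r1) + s[k[-1]] * r1
    return total
```

When $s$ is exactly linear across each band the approximation is exact, and for smooth $s$ the error scales with its curvature over the band width, which is why short bands of $\Delta f\leq 4$ Hz suffice.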
The discrete values of the rapidly varying function in each band can be expanded in a sum of discrete Legendre polynomials: $r_{k}=\sum_{\ell=0}^{M}\rho_{\ell}P_{\ell}(k)$ (34) where $P_{\ell}(k)$ are the discrete Legendre polynomials of order $\ell$ Neuman and Schonbach (1974) and the expansion coefficients are given by $\rho_{\ell}=\alpha_{\ell}\sum_{k=0}^{M}P_{\ell}(k)r_{k}\,,$ (35) where $\alpha_{\ell}$ is a normalization constant. The contribution to the inner products from each frequency band is given by $\sum_{k=0}^{M}s_{k}r_{k}\simeq\sum_{\ell=0}^{1}\alpha_{\ell}^{-1}\rho_{\ell}\sigma_{\ell}\,,$ (36) where $\sigma_{\ell}$ are the expansion coefficients for the slowly varying function $s(f)$ and the sum has been restricted to just the first two terms in the Legendre expansion, which is usually sufficient when using short frequency bands, $\Delta f\leq 4$ Hz. For the slowly varying function $s(f)$ the required expansion coefficients in the $k$th frequency band are given by $\sigma_{0}=(s((k+1)\Delta f)+s(k\Delta f))/2$ and $\sigma_{1}=(s((k+1)\Delta f)-s(k\Delta f))/2$. The sum (36) includes the first and last bins in each frequency band, so the contributions from bins shared by adjacent bands are double counted; this can be corrected for by subtracting the sum over the $K-2$ repeated values. The heterodyning procedure speeds up the likelihood calculations by a factor of $\sim M$, with the largest speed up being for low mass, long duration signals such as those from binary neutron star mergers.

### II.7 Examples from GWTC-2

To illustrate the performance of the sampler, two examples were chosen from the second Gravitational Wave Transient Catalog, GWTC-2 Abbott et al. (2020a). The first example, GW190924_021846, was chosen as it was one of the signals flagged for glitch removal. The second example, GW190719_215514, was chosen as it has among the lowest signal-to-noise ratios, and thus posed more of a challenge for the initial search.
The prior ranges on the masses were set between $0.25M_{\odot}$ and $150M_{\odot}$. The priors on the aligned spin components were chosen to correspond to a uniform distribution of spin directions and magnitudes, in an effort to mimic the priors used in the reference LIGO/Virgo analyses, which used the IMRPhenomPv2 precessing spin model Hannam et al. (2014). The analyses shown here used the IMRPhenomD phenomenological model Khan et al. (2016), which describes the dominant $\ell=|m|=2$ mode of a quasi-circular, spin-aligned binary system. To stay within the domain of validity of this model, the maximum spin magnitude was set to $\chi_{\rm max}=0.85$ and the maximum mass ratio was set to $m_{1}/m_{2}<18$. Figure 9: Time-frequency maps of LIGO Livingston data centered at GPS time 1253326744. The black line indicates the reconstructed time-frequency track for GW190924_021846. The upper panel shows the raw whitened data, while the lower panel shows the whitened data after wavelet de-noising. Figure 9 illustrates the output of the search phase and glitch removal for GW190924_021846. A moderately loud glitch that intersects the time-frequency track of the signal was identified and removed from the data. Figure 10: A comparison of parameter inference for GW190924_021846 showing the preferred LALinference IMRPhenomPv2 samples and the QuickCBC IMRPhenomD samples. Figure 10 compares the output of the full QuickCBC analysis, using the internally de-noised data, with the publicly released LALinference samples available from the GWOSC website. The LALinference analysis used a BayesWave PSD and glitch model. The results show good agreement, the main difference being that the QuickCBC analysis took a few minutes while the LALinference analysis took a few days. Figure 11: A comparison of parameter inference for GW190719_215514 showing the preferred LALinference IMRPhenomPv2 samples and the QuickCBC IMRPhenomD samples. 
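The aligned-spin prior described above can be realized by direct Monte Carlo sampling. This is a minimal sketch under an assumption of ours, reading "uniform in directions and magnitudes" as a magnitude uniform on $[0,\chi_{\rm max}]$ with an isotropic direction; the aligned component $\chi_z$ is then the projection of the spin vector onto the orbital angular momentum:

```python
import numpy as np

rng = np.random.default_rng(1)
chi_max = 0.85   # maximum spin magnitude used in the analysis

def sample_aligned_spin(n):
    """Aligned-spin components chi_z implied by spin vectors drawn with
    uniform magnitude on [0, chi_max] and isotropic direction."""
    a = rng.uniform(0.0, chi_max, n)       # spin magnitude
    cos_tilt = rng.uniform(-1.0, 1.0, n)   # isotropic tilt angle
    return a * cos_tilt                    # projection onto L

chi_z = sample_aligned_spin(200_000)
```

The resulting distribution of $\chi_z$ is strongly peaked at zero, which is the qualitative behavior this prior choice is intended to mimic.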
Figure 11 compares the QuickCBC and LALinference analyses for the low signal-to-noise ratio event GW190719_215514. The weakness of the signal posed no obstacle to the QuickCBC analysis, with the search phase locking onto the signal after a few hundred iterations. The posterior distributions from the two samplers are again in good agreement. ## III Summary The QuickCBC analysis pipeline is an end-to-end, open-source tool for gravitational wave data analysis. Its key features are speed and robustness against noise transients. The main limitation of the pipeline is that it currently only works with the IMRPhenomD waveform model. A near-term development goal is to expand the range of waveform models, starting with the IMRPhenomHM model London et al. (2018), which includes contributions from higher modes, and the IMRPhenomD_NRTidal model Dietrich et al. (2019), which includes tidal effects for binary neutron star mergers. A longer-term goal is to add precessing spin models. For researchers outside the LIGO/Virgo collaboration, the QuickCBC pipeline can serve as a platform for developing novel analyses of the publicly released data. Within the LIGO/Virgo collaboration, the pipeline could be used to generate low-latency sky maps and to estimate how likely it is that the system will result in the disruption of a neutron star, and thus be a good candidate for producing an electromagnetic counterpart. ## Acknowledgments The author is grateful for the support provided by NSF award PHY1912053. This work was initiated while the author was on sabbatical at the Observatoire de la Côte d’Azur, kindly hosted by Nelson Christensen. Discussions with Tyson Littenberg, Katerina Chatziioannou and Marcella Wijngaarden were very helpful. 
The author greatly appreciates Marcella Wijngaarden’s help in tracking down an error in the sky localization algorithm, Charlie Hoy’s help in extracting the LALinference samples and Bence Bécsy’s writing of the scripts for automating the running of the pipeline in response to triggers from the Gravitational-Wave Candidate Event Database. The author appreciates feedback on a draft version from Will Farr and Nelson Christensen. This research has made use of data obtained from the Gravitational Wave Open Science Center (https://www.gw-openscience.org), a service of LIGO Laboratory, the LIGO Scientific Collaboration and the Virgo Collaboration. LIGO is funded by the U.S. National Science Foundation. Virgo is funded by the French Centre National de Recherche Scientifique (CNRS), the Italian Istituto Nazionale della Fisica Nucleare (INFN) and the Dutch Nikhef, with contributions by Polish and Hungarian institutes. ## References * Abbott et al. (2016) B. P. Abbott et al. (Virgo, LIGO Scientific), Phys. Rev. X 6, 041015 (2016), eprint 1606.04856. * Abbott et al. (2019) B. Abbott et al. (LIGO Scientific, Virgo), Phys. Rev. X 9, 031040 (2019), eprint 1811.12907. * Abbott et al. (2020a) R. Abbott et al. (LIGO Scientific, Virgo) (2020a), eprint 2010.14527. * Messick et al. (2017) C. Messick, K. Blackburn, P. Brady, P. Brockill, K. Cannon, R. Cariou, S. Caudill, S. J. Chamberlin, J. D. Creighton, R. Everett, et al., Physical Review D 95 (2017), ISSN 2470-0029, URL http://dx.doi.org/10.1103/PhysRevD.95.042001. * Klimenko et al. (2016) S. Klimenko, G. Vedovato, M. Drago, F. Salemi, V. Tiwari, G. Prodi, C. Lazzaro, K. Ackley, S. Tiwari, C. Da Silva, et al., Physical Review D 93 (2016), ISSN 2470-0029, URL http://dx.doi.org/10.1103/PhysRevD.93.042004. * Canton et al. (2020) T. D. Canton, A. H. Nitz, B. Gadre, G. S. Davies, V. Villa-Ortega, T. Dent, I. Harry, and L. 
Xiao, _Realtime search for compact binary mergers in advanced ligo and virgo’s third observing run using pycbc live_ (2020), eprint 2008.07494. * Adams et al. (2016) T. Adams, D. Buskulic, V. Germain, G. M. Guidi, F. Marion, M. Montani, B. Mours, F. Piergiovanni, and G. Wang, Classical and Quantum Gravity 33, 175012 (2016), ISSN 1361-6382, URL http://dx.doi.org/10.1088/0264-9381/33/17/175012. * Chu (2017) Q. Chu, Ph.D. thesis, The University of Western Australia (2017). * Veitch et al. (2015) J. Veitch, V. Raymond, B. Farr, W. Farr, P. Graff, S. Vitale, B. Aylott, K. Blackburn, N. Christensen, M. Coughlin, et al., Physical Review D 91 (2015), ISSN 1550-2368, URL http://dx.doi.org/10.1103/PhysRevD.91.042003. * Ashton et al. (2019) G. Ashton, M. Hübner, P. D. Lasky, C. Talbot, K. Ackley, S. Biscoveanu, Q. Chu, A. Divakarla, P. J. Easter, B. Goncharov, et al., The Astrophysical Journal Supplement Series 241, 27 (2019), ISSN 1538-4365, URL http://dx.doi.org/10.3847/1538-4365/ab06fc. * Viets et al. (2018) A. D. Viets, M. Wade, A. L. Urban, S. Kandhasamy, J. Betzwieser, D. A. Brown, J. Burguet-Castell, C. Cahillane, E. Goetz, K. Izumi, et al., Classical and Quantum Gravity 35, 095015 (2018), ISSN 1361-6382, URL http://dx.doi.org/10.1088/1361-6382/aab658. * Collaboration et al. (2018) V. Collaboration, F. Acernese, T. Adams, K. Agatsuma, L. Aiello, A. Allocca, M. A. Aloy, A. Amato, S. Antier, M. Arène, et al., _Calibration of advanced virgo and reconstruction of the gravitational wave signal h(t) during the observing run o2_ (2018), eprint 1807.03275. * Driggers et al. (2019) J. Driggers, S. Vitale, A. Lundgren, M. Evans, K. Kawabe, S. Dwyer, K. Izumi, R. Schofield, A. Effler, D. Sigg, et al., Physical Review D 99 (2019), ISSN 2470-0029, URL http://dx.doi.org/10.1103/PhysRevD.99.042001. * Davis et al. (2019) D. Davis, T. Massinger, A. Lundgren, J. C. Driggers, A. L. Urban, and L. 
Nuttall, Classical and Quantum Gravity 36, 055011 (2019), ISSN 1361-6382, URL http://dx.doi.org/10.1088/1361-6382/ab01c5. * Vajente et al. (2020) G. Vajente, Y. Huang, M. Isi, J. Driggers, J. Kissel, M. Szczepańczyk, and S. Vitale, Physical Review D 101 (2020), ISSN 2470-0029, URL http://dx.doi.org/10.1103/PhysRevD.101.042003. * Cornish et al. (2020) N. J. Cornish, T. B. Littenberg, B. Bécsy, K. Chatziioannou, J. A. Clark, S. Ghonge, and M. Millhouse (2020), eprint 2011.09494. * Abbott et al. (2020b) B. P. Abbott et al. (LIGO Scientific, Virgo), Class. Quant. Grav. 37, 055002 (2020b), eprint 1908.11170. * Abbott et al. (2018) B. P. Abbott, R. Abbott, T. D. Abbott, M. R. Abernathy, F. Acernese, K. Ackley, C. Adams, T. Adams, P. Addesso, R. X. Adhikari, et al., Classical and Quantum Gravity 35, 065010 (2018), ISSN 1361-6382, URL http://dx.doi.org/10.1088/1361-6382/aaaafa. * Allen (2005) B. Allen, Phys. Rev. D 71, 062001 (2005), eprint gr-qc/0405045. * Usman et al. (2016) S. A. Usman et al., Class. Quant. Grav. 33, 215004 (2016), eprint 1508.02357. * Cornish and Littenberg (2015) N. J. Cornish and T. B. Littenberg, Class. Quant. Grav. 32, 135012 (2015), eprint 1410.3835. * Littenberg and Cornish (2015) T. B. Littenberg and N. J. Cornish, Phys. Rev. D 91, 084034 (2015), URL https://link.aps.org/doi/10.1103/PhysRevD.91.084034. * Abbott et al. (2017) B. Abbott et al. (LIGO Scientific, Virgo), Phys. Rev. Lett. 119, 161101 (2017), eprint 1710.05832. * Pankow et al. (2018) C. Pankow et al., Phys. Rev. D 98, 084016 (2018), eprint 1808.03619. * Singer and Price (2016) L. P. Singer and L. R. Price, Phys. Rev. D 93, 024013 (2016), eprint 1508.03634. * Pankow et al. (2015) C. Pankow, P. Brady, E. Ochsner, and R. O’Shaughnessy, Phys. Rev. D 92, 023002 (2015), eprint 1502.04370. * George and Huerta (2018) D. George and E. Huerta, Phys. Lett. B 778, 64 (2018), eprint 1711.03121. * Wysocki et al. (2019) D. Wysocki, R. O’Shaughnessy, J. Lange, and Y.-L. L. Fang, Phys. Rev. 
D 99, 084026 (2019), eprint 1902.04934. * Smith et al. (2020) R. J. Smith, G. Ashton, A. Vajpeyi, and C. Talbot, Mon. Not. Roy. Astron. Soc. 498, 4492 (2020), eprint 1909.11873. * Delaunoy et al. (2020) A. Delaunoy, A. Wehenkel, T. Hinderer, S. Nissanke, C. Weniger, A. R. Williamson, and G. Louppe (2020), eprint 2010.12931. * Torrence and Compo (1998) C. Torrence and G. P. Compo, Bulletin of the American Meteorological Society 79, 61 (1998). * Cornish (2010) N. J. Cornish (2010), eprint 1007.4820. * Cornish and Shuman (2020) N. J. Cornish and K. Shuman, Phys. Rev. D 101, 124008 (2020), eprint 2005.03610. * Chatziioannou et al. (2020) K. Chatziioannou, N. J. Cornish, M. Wijngaarden, and T. B. Littenberg (2020), unpublished Manuscript. * Chatziioannou et al. (2021) K. Chatziioannou, N. Cornish, M. Wijngaarden, and T. B. Littenberg (2021), eprint 2101.01200. * Swendsen and Wang (1986) R. H. Swendsen and J.-S. Wang, Phys. Rev. Lett. 57, 2607 (1986), URL https://link.aps.org/doi/10.1103/PhysRevLett.57.2607. * Cornish (2016) N. J. Cornish (2016), eprint 1606.00953. * Littenberg and Cornish (2009) T. B. Littenberg and N. J. Cornish, Physical Review D 80 (2009), ISSN 1550-2368, URL http://dx.doi.org/10.1103/PhysRevD.80.063007. * Vousden et al. (2015) W. D. Vousden, W. M. Farr, and I. Mandel, Monthly Notices of the Royal Astronomical Society 455, 1919–1937 (2015), ISSN 1365-2966, URL http://dx.doi.org/10.1093/mnras/stv2422. * Ter Braak (2006) C. J. Ter Braak, Statistics and Computing 16, 239 (2006). * Husa et al. (2016) S. Husa, S. Khan, M. Hannam, M. Pürrer, F. Ohme, X. Jiménez Forteza, and A. Bohé, Phys. Rev. D 93, 044006 (2016), eprint 1508.07250. * Khan et al. (2016) S. Khan, S. Husa, M. Hannam, F. Ohme, M. Pürrer, X. Jiménez Forteza, and A. Bohé, Phys. Rev. D 93, 044007 (2016), eprint 1508.07253. * Hannam et al. (2014) M. Hannam, P. Schmidt, A. Bohé, L. Haegel, S. Husa, F. Ohme, G. Pratten, and M. 
Pürrer, Physical Review Letters 113 (2014), ISSN 1079-7114, URL http://dx.doi.org/10.1103/PhysRevLett.113.151101. * Neuman and Schonbach (1974) C. P. Neuman and D. I. Schonbach, International Journal for Numerical Methods in Engineering 8, 743 (1974), eprint https://onlinelibrary.wiley.com/doi/pdf/10.1002/nme.1620080406, URL https://onlinelibrary.wiley.com/doi/abs/10.1002/nme.1620080406. * London et al. (2018) L. London, S. Khan, E. Fauchon-Jones, C. García, M. Hannam, S. Husa, X. Jiménez-Forteza, C. Kalaghatgi, F. Ohme, and F. Pannarale, Phys. Rev. Lett. 120, 161102 (2018), URL https://link.aps.org/doi/10.1103/PhysRevLett.120.161102. * Dietrich et al. (2019) T. Dietrich, S. Khan, R. Dudi, S. J. Kapadia, P. Kumar, A. Nagar, F. Ohme, F. Pannarale, A. Samajdar, S. Bernuzzi, et al., Physical Review D 99 (2019), ISSN 2470-0029, URL http://dx.doi.org/10.1103/PhysRevD.99.024029.
# Ultrafast Frustration-Breaking and Magnetophononic Driving of Singlet Excitations in a Quantum Magnet F. Giorgianni Paul Scherrer Institute, CH-5232 Villigen-PSI, Switzerland B. Wehinger Paul Scherrer Institute, CH-5232 Villigen-PSI, Switzerland Department of Quantum Matter Physics, University of Geneva, CH-1211 Geneva 4, Switzerland European Synchrotron Radiation Facility, 71 Av. des Martyrs, 38000 Grenoble, France S. Allenspach Paul Scherrer Institute, CH-5232 Villigen-PSI, Switzerland Department of Quantum Matter Physics, University of Geneva, CH-1211 Geneva 4, Switzerland N. Colonna Paul Scherrer Institute, CH-5232 Villigen-PSI, Switzerland National Centre for Computational Design and Discovery of Novel Materials (MARVEL), Ecole Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland C. Vicario Paul Scherrer Institute, CH-5232 Villigen-PSI, Switzerland P. Puphal Paul Scherrer Institute, CH-5232 Villigen-PSI, Switzerland Max Planck Institute for Solid State Research, Heisenbergstrasse 1, 70569 Stuttgart, Germany E. Pomjakushina Paul Scherrer Institute, CH-5232 Villigen-PSI, Switzerland B. Normand Paul Scherrer Institute, CH-5232 Villigen-PSI, Switzerland Lehrstuhl für Theoretische Physik I, Technische Universität Dortmund, Otto-Hahn-Strasse 4, 44221 Dortmund, Germany Institute of Physics, Ecole Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland Ch. Rüegg Paul Scherrer Institute, CH-5232 Villigen-PSI, Switzerland Department of Quantum Matter Physics, University of Geneva, CH-1211 Geneva 4, Switzerland Institute of Physics, Ecole Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland Institute of Quantum Electronics, ETH Zürich, CH-8093 Hönggerberg, Switzerland ###### Abstract Ideal magnetic frustration forms the basis for the emergence of exotic quantum spin states that are entirely nonmagnetic. 
Such singlet spin states are the defining feature of the Shastry-Sutherland model, and of its faithful materials realization in the quantum antiferromagnet SrCu2(BO3)2. To address these states on ultrafast timescales, despite their lack of any microscopic order parameter, we introduce a nonlinear magnetophononic mechanism to alter the quantum spin dynamics by driving multiple optical phonon modes coherently and simultaneously. We apply intense terahertz pulses to create a nonequilibrium modulation of the magnetic interactions that breaks the ideal frustration of SrCu2(BO3)2, such that previously forbidden physics can be driven in a coherent manner. Specifically, this driving populates a purely magnetic excitation, the singlet branch of the two-triplon bound state, by resonance with the difference frequency of two pumped phonons. Our results demonstrate how light-driven phonons can be used for the ultrafast and selective manipulation of interactions in condensed matter, even at frequencies far from those of the pump spectrum, offering valuable additional capabilities for the dynamical control of quantum many-body phenomena. ## I Introduction Using ultrafast lasers to access all the intrinsic interaction timescales of correlated quantum materials opens a new window on fundamental processes in nonequilibrium many-body physics. Coherent light sources developed to combine ultrafast time structure and high intensity at the appropriate terahertz (THz) or infrared (IR) frequencies [1, 2] have been used in complex condensed matter to enhance superconductivity [3], drive metal-insulator transitions [4], manipulate multiferroic order [5], and “Floquet engineer” the electronic band structure [6]. This type of dynamical driving, in which the equilibrium (static) properties of an unconventional quantum state remain largely unaltered, has the potential to reveal a number of previously hidden phenomena. 
In ultrafast magnetism, the magnetic field of a light pulse can drive precessional spin dynamics and spin waves in ordered antiferromagnets [7, 8], while the electric field can modify the magnetic interactions [9]. Strong lattice excitations have been used to melt magnetic order [10] and to induce spin waves through an effective magnetic field [11]. The concept of magnetophononics, the modulation of magnetic exchange interactions by ultrafast coherent lattice displacements, has been discussed theoretically [12], and resonant phononic effects observed in ordered magnetic materials have been ascribed in part to the exchange interactions [13] or fully to crystal-field effects [14]. While the ultrafast manipulation of ordered phases is developing towards applications in spintronics, the situation in quantum magnets that lack any magnetic order remains largely unexplored. Figure 1: Coherent lattice control in the time domain. (a) An intense THz pulse (blue) with electric field polarized linearly along the $a^{\prime}$ axis of a SrCu2(BO3)2 crystal provides coherent driving of dipole-active lattice vibrations. A femtosecond near-IR (NIR) probe pulse (red) measures the polarization changes. (b-c) THz electric field and polarization rotation of the probe as functions of the delay time, $t$. Inset: THz-driven dynamics after filtering of the fast component to reveal coherent oscillations associated with the low-lying magnetic excitation (dark blue). In this work we begin the quest to control the properties of nonordered quantum magnetic materials. The paradigm of ideal frustration is the fundamental ingredient in all of the complex quantum many-body states in magnetism, most of which emerge from rather simple spin Hamiltonians [15]. Its simplest form is geometrical frustration, which has been realized in a wide range of materials hosting antiferromagnetically interacting spins in the triangle-based motifs of the kagome, pyrochlore, Shastry-Sutherland, and other lattices. 
More complex forms of ideal frustration have been produced using magnetic interactions that are anisotropic in spin or real space, examples including spin ices, SU(N) magnets, and (proximate) Kitaev systems [16]. However, the characteristic properties of the resulting ground and excited states, which can include both gapped and gapless quantum spin liquids [17], fractional quasiparticles, topological order, and long-ranged entanglement [16], are often undetectable by the conventional probes of experimental condensed matter. This makes them ideal candidates for ultrafast probing. For our study (Fig. 1) we choose SrCu2(BO3)2, an archetypal quantum magnetic material whose physics is dominated by local quantum mechanical singlet states [18]. The singlet encapsulates the essence of quantum magnetism, where the fluctuating spin variables combine into both local and global states of especially low energy that have no external magnetic properties [19]. The ideally frustrated geometry of SrCu2(BO3)2 [Fig. 1(a)] realizes a spin model formulated by Shastry and Sutherland specifically for its exact dimer-singlet ground state [20], and if an applied pressure is used to alter the interaction parameters then it undergoes a first-order quantum phase transition (QPT) to a four-site “plaquette” singlet state [21, 22, 23, 24]. This ideal magnetic frustration also causes SrCu2(BO3)2 to display an anomalous spectrum of spin excitations and complex phase transitions both in an applied magnetic field [25, 26, 27] and as a function of temperature [24]. For the goal of ultrafast modulation of magnetic properties in nonordered materials such as SrCu2(BO3)2, the magnetophononic mechanism is an obvious candidate. 
Experiments applying static pressure to quantum magnets have created novel ground and excited states [22, 28] and have controlled QPTs in both localized [29] and itinerant magnetic systems [30], demonstrating not only the sensitivity of the magnetic interactions to the atomic positions but also the potential for qualitatively new dynamical phenomena. However, modulating an interaction, $J$, at some available phonon frequency, $\omega_{i}$, does not constitute control of dynamical properties: a priori there is no match between the energy scales of the dominant IR-active phonon modes and of the elementary magnetic excitations in any material, and we will see that SrCu2(BO3)2 is a case in point. To achieve such frequency matching, we extend magnetophononics to the nonlinear regime, where sums and differences of the available phonon frequencies span a wide energy range, but extremely intense electric fields are required. By using coherent THz pulses to drive IR-active phonons in SrCu2(BO3)2 (Fig. 1), we demonstrate experimentally how the leading difference frequency creates a nonequilibrium occupation of the lowest excited singlet state. We establish the theoretical framework for the origin of this phenomenon, in the breaking of ideal magnetic frustration within the driven lattice structure, which we verify by density functional theory (DFT) calculations. The structure of this article is as follows. In Sec. II we review the properties of SrCu2(BO3)2. In Sec. III we present the results of our ultrafast spectroscopic investigations. Section IV contains a qualitative and quantitative account of the nonlinear magnetophononic phenomena we observe. In Sec. V we discuss the consequences of our findings for the selective static and dynamical control of materials properties both within and beyond quantum magnetism. Figure 2: Low-energy spin and phonon modes in SrCu2(BO3)2. 
(a) Schematic representation of the spin network in SrCu2(BO3)2, showing how Cu2+ ions ($S=1/2$) form the Shastry-Sutherland geometry with interaction parameters $J$ on the Cu-Cu dimers and $J^{\prime}$ between neighboring orthogonal dimers. The localized spin excitations above the singlet ground state (red dimers) are individual triplons (blue dimers) and two-triplon bound states (TBS, blue shaded regions). (b) Low-energy spectrum of SrCu2(BO3)2 at ${\bf k}=0$. In the spin sector, the STTBS lies close to the one-triplon excitation, the TTTBS has a smaller binding energy, and the QTTBS lies close to the threshold for creating two isolated triplons (energies of the lowest TTTBS and QTTBS taken from Ref. [31]). In the lattice sector, we show the frequencies of the two phonons excited most strongly in our experiment, $\omega_{a}=3.80$ THz and $\omega_{b}=4.60$ THz. The light blue line and gray shading represent for comparison the amplitude of the pump spectrum [Fig. 3(a)]. (c-d) Lattice-displacement eigenvectors for the phonon modes at $\omega_{a}$ and $\omega_{b}$; because both phonons are $E$-symmetric, we show one of the two degenerate modes in each case. $J$ and $J^{\prime}$ depend on the bond lengths and angles in the superexchange paths involving the Cu, O, and B atoms. ## II Shastry-Sutherland Model and SrCu2(BO3)2 The Shastry-Sutherland model, for $S=1/2$ spins with Heisenberg interactions on the two-dimensional orthogonal-dimer network shown in Fig. 2(a) [20], is one of the most intriguing in quantum magnetism. The exact and entirely nonmagnetic ground state of singlet quantum dimers is found when the ratio of interdimer ($J^{\prime}$) to intradimer ($J$) interactions satisfies $\alpha=J^{\prime}/J\leq 0.675$, above which the QPT occurs to the plaquette-singlet state [21]. It is quite remarkable that this simple model is realized so faithfully in the compound SrCu2(BO3)2 [Fig. 
1(a)] [18], with $J$ determined by Cu-O-Cu superexchange processes on the Cu2+ dimer units and $J^{\prime}$ by superexchange through the BO3 units. The magnetic excitation spectrum of SrCu2(BO3)2, depicted for wave vector ${\bf k}=0$ in Fig. 2(b), contains as its lowest mode the “triplon” (singlet-triplet) excitation at $\Delta=2.9$ meV ($\equiv 0.71$ THz), whose dispersion is almost flat in ${\bf k}$ [32] as a consequence of the ideal frustration. The lattice geometry is also responsible for an anomalously strong binding energy for triplon pairs, with the result that the singlet two-triplon bound state (STTBS), the $S=0$ branch of this multiplet, appears just above the one-triplon mode, at 3.6 meV ($\omega_{\rm TBS}=0.87$ THz). At higher energies, additional discrete and continuum excitations include the triplet ($S=1$, TTTBS) and quintet ($S=2$, QTTBS) branches of this bound state. Figure 3: THz field-driven lattice and spin dynamics in the frequency domain. (a) Spectral amplitude (blue) of the data of Fig. 1(c) computed for $3.5\leq t\leq 20$ ps. The primary peaks are (i) $E$-symmetric phonons at 3.80 THz and 4.60 THz, (ii) a $B_{1}$-symmetric Raman phonon at 11.6 THz, and (iii) the TBS excitation at $\omega_{\rm TBS}=0.87$ THz. The light blue line and gray shading show the spectral amplitude of the driving electric field [Fig. 1(b)]. (b) Spectral amplitude measured at 8 K, where the TBS is absent and the driven dynamics reveal no coherent oscillations at $\omega_{\rm TBS}$ (inset). (c) Comparison of low-frequency spectra at 3.5 and 8 K. The black dashed line marks the one-triplon gap, $\Delta=2.9$ meV (0.71 THz), and the red dot $\omega_{\rm TBS}$ from Raman spectroscopy [33, 34]. (d) Temperature dependence of the TBS, normalized to the peak height. 
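The mechanism behind the exact dimer-singlet ground state can be checked on a minimal cluster: on the Shastry-Sutherland lattice each interdimer bond couples a spin symmetrically to both members of a neighboring dimer, so the $J^{\prime}$ terms annihilate the singlet product state. The following small exact-diagonalization sketch in Python illustrates this (the four-site cluster and variable names are illustrative choices of ours, not the full lattice):

```python
import numpy as np

# Spin-1/2 operators
sx = np.array([[0, 0.5], [0.5, 0]])
sy = np.array([[0, -0.5j], [0.5j, 0]])
sz = np.array([[0.5, 0], [0, -0.5]])
I2 = np.eye(2)

def spin_op(single, site, n):
    """Embed a single-site operator at `site` in an n-spin Hilbert space."""
    mats = [I2] * n
    mats[site] = single
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def heisenberg(i, j, n):
    """S_i . S_j as a matrix on the n-spin Hilbert space."""
    return sum(spin_op(s, i, n) @ spin_op(s, j, n) for s in (sx, sy, sz))

# Minimal orthogonal-dimer cluster: dimers (0,1) and (2,3); site 2 couples
# symmetrically to BOTH members of dimer (0,1), as on the SS lattice.
J, Jp, n = 1.0, 0.63, 4
H = J * (heisenberg(0, 1, n) + heisenberg(2, 3, n)) \
    + Jp * (heisenberg(2, 0, n) + heisenberg(2, 1, n))

# Product of dimer singlets |s>_{01} (x) |s>_{23}
singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2.0)
psi = np.kron(singlet, singlet)

# (S_0 + S_1) annihilates the (0,1) singlet, so every J' bond drops out:
# H psi = -3J/2 psi exactly, independent of J'.
E = psi @ H @ psi
```

The frustrated geometry, not fine-tuning of $J^{\prime}$, makes the singlet product an exact eigenstate, which is the essence of the Shastry-Sutherland construction.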
These modes have been studied by a combination of neutron scattering [35], which targets the triplon and TTTBS, and Raman [33, 34], IR [36], and electron spin resonance (ESR) spectroscopies [31], which observe the spectrum at ${\bf k}=0$. To date, a detailed explanation for the phonon-assisted coupling of light to the spin excitations observed by Raman and IR remains elusive due to the inherently incoherent nature of these experiments. The interaction ratio in SrCu2(BO3)2, $\alpha=0.63$, lies close to the QPT of the Shastry-Sutherland model, allowing this transition to be induced under pressure [22]. Recent attention has focused on how the magnetic interactions depend on the geometry of the dimer units [37, 38, 39], making SrCu2(BO3)2 a fascinating and timely candidate for exploring ideally frustrated quantum magnetism on ultrafast timescales. ## III Ultrafast Experiment We perform THz-pump, optical-probe spectroscopy using the apparatus of Ref. [40], whose technical specifications are detailed in Sec. S1 of the Supplemental Materials (SM) [41]. Our experiments use a single-crystal sample of SrCu2(BO3)2 maintained at 3.5 K ($k_{\rm B}T\ll\Delta$), a temperature where the ground state is close to the pure singlet state. As represented in Fig. 1(a), intense light pulses [Fig. 1(b)] with spectral content between 2 and 7 THz [Fig. 2(b)] drive the resonant excitation of dipole-active phonon modes [42]. To estimate the external THz electric-field strength of the pump, we measured the energy per pulse, beam waist, and pulse duration, also reported in Sec. S1, to obtain the value $E_{\rm THz}=3.2$ MVcm-1, which is comparable to other modern high-intensity sources [43, 44]. To probe the driven lattice and spin dynamics, we measured the ultrafast polarization rotations [11], shown in Fig. 1(c), caused by the associated optical birefringence and Faraday effects. 
These are imprinted on a co-propagating NIR pulse (50 fs, wavelength 800 nm) with a variable delay time, $t$, and an analysis of the complete time-frequency response function is presented in Sec. S2 of the SM [41]. As the inset of Fig. 1(c) makes clear, the 1 ps pulse creates dynamical oscillations that persist for tens of ps. We observe coherently excited phonons close to the peak of the pump spectrum [Fig. 3(a)], primarily two of the $E$-symmetric modes measured by IR spectroscopy [42], centered at $\omega_{a}=3.80$ and $\omega_{b}=4.60$ THz. As noted above, these phonon frequencies lie far above the primary features in the magnetic spectrum. Nonetheless, we observe a striking response precisely at $\omega_{\rm TBS}=0.87$ THz [Fig. 3(b)], even though the spectral content of the pump is negligible in this frequency range. In the same way, the feature centered at 11.6 THz lies much higher than the spectral content of the pump, and the appearance of this Raman-active $B_{1}$ phonon mode in the measured response indicates that nonlinear phonon mixing is allowing sum-frequency excitation processes. In fact, this feature constitutes one of the clearest examples of a sum-frequency phonon excitation yet observed, and thus we analyze it in detail in Sec. S3 of the SM [41], but from the standpoint of magnetophononics it serves only as an indicator of the mechanism for the phenomena we investigate. Henceforth we refer to the STTBS, whose nonmagnetic ($S=0$) character makes it the only low-energy mode one may expect to excite strongly with phonons, simply as the TBS. To identify the excitation at 0.87 THz as the TBS, we repeated the experiment at 8 K [Figs. 3(b-c)], where thermal fluctuations cause the triplons to lose their character [45]. While the spectral features of the lattice remain almost unchanged, the magnetic fingerprint of THz driving has disappeared. As Fig. 
3(d) makes clear, the measured amplitude shows rapid quenching above 5 K, exactly as observed for the TBS in Ref. [33], where this behavior was attributed to strong scattering from thermally excited triplets. The reduced lifetime of the $B_{1}$ phonon in Fig. 3(b) also provides evidence of damping processes due to spin-lattice coupling. Figure 4: THz pump strength and frequency-dependence. (a) Linear dependence of phonon amplitudes and (b) quadratic dependence of the TBS amplitude on the electric-field strength. (c) Four different normalized THz pump spectra shown together with the reflectivity of SrCu2(BO3)2 at 4 K (calculated from Ref. [42]); peak field values for the corresponding spectra are indicated in the legend. (d) TBS mode amplitude (colored points), normalized to the square of the peak field, for different values of the THz pump-frequency centroid. Here we show for comparison the real part, $\sigma_{1}(\omega)$, of the optical conductivity. To understand the mechanism driving a purely magnetic excitation, the fact that the spectral content of the pump contains negligible intensity below 2 THz excludes a direct coupling of light to the TBS, which in any case is not IR-active [36]. To probe the indirect origin of the observed spin dynamics, we measure the mode amplitudes as functions of the THz electric-field strength. The IR phonons display the linear dependence expected for resonant excitation [Fig. 4(a)]. By contrast, the TBS amplitude varies quadratically with the field strength [Fig. 4(b)], and thus with lattice displacement, indicating a nonlinear coupling mechanism essentially different from electric or magnetic dipolar interactions [8, 46]. Confirmation that this dependence is quadratic can also be obtained by inverting the polarity of the pump electric field, as we show in Sec. S4 of the SM [41]. For a direct demonstration of which IR phonons provide the driving, we vary the spectral content of the pump [Fig. 4(c)] and measure the TBS amplitude. 
Again this is large only when the pump spectrum covers $\omega_{a}$ and $\omega_{b}$ [Fig. 4(d)], and there is no sign of coherent magnetic dynamics in off-resonant conditions, such as driving frequencies above 6 THz. This proves that the opto-magnetic coupling is mediated by the resonant lattice excitations and cannot be explained by nonlinear effects involving non-resonant electronic excitations, such as free-carrier generation by THz-induced electronic breakdown [47] or impact ionization [48]. For a quantitative analysis of the effect of the THz pulse on the atomic motions within the sample, we equate the polarization induced by the electric field with the modulation of the dipole moment, which peaks strongly when the pulse frequency is resonant with a phonon mode, denoted by $m$. In this situation $P_{m}=n_{d}\delta_{m}\mu_{m}$, where $\mu_{m}$ is the net charge displacement due to mode $m$, $n_{d}$ is a dipole density, and $\delta_{m}$ is the maximum displacement coordinate of the phonon. In this way we deduce (Sec. S1 of the SM [41]) maximum displacements up to $\delta_{b}=0.17$ Å for the 4.60 THz mode [shown in Fig. 2(d)], which is comparable to the value estimated in SrTiO3 [49]. We stress that $\delta_{m}$ represents the maximum displacement of the most displaced O ion in the SrCu2(BO3)2 structure due to phonon mode $m$, and that the corresponding displacements of the Cu ions are generally smaller (by a factor of 3-4 for the 4.60 THz mode), whence the system does not approach the Lindemann melting criterion. Because the temporal duration of the pump pulse is approximately 0.5 ps [Fig. 1(b)], the amount of energy transferred is small enough that the THz driving cannot increase the sample temperature significantly. Figure 5: Dynamic control of magnetic interactions by light-driven phonons. (a) Schematic representation of atomic motions in the $ab$-plane associated with the $\omega_{b}=4.60$ THz phonon. 
(b) Interaction parameters $J$, $J_{1}^{\prime}$, $J_{2}^{\prime}$, $J_{3}^{\prime}$, $J_{4}^{\prime}$, $\Delta J_{12}^{\prime}$, and $\Delta J_{34}^{\prime}$ calculated for the symmetry-broken lattice structure as functions of the corresponding phonon displacement, $q_{b}$. (c) Temporal modulation of $q_{a}+q_{b}$ calculated using the experimental THz driving field. (d) Corresponding dynamic modulation of $\Delta J_{12}^{\prime}$. (e) Spectrum of driven IR phonons, with weight only at the frequencies $\omega_{a}$ and $\omega_{b}$. Inset: energy-level diagram of the TBS excitation process, which is a resonance between the phonon frequency difference, $\omega_{b}-\omega_{a}$, and $\omega_{\rm TBS}$. (f) Fourier transform of the DFT time series for $\Delta J_{12}^{\prime}(t)$, showing spectral weight at $\omega_{\rm TBS}$. (g) Interaction ratio, $\alpha=\bar{J}^{\prime}/\bar{J}$, where $\bar{J}={\textstyle\frac{1}{2}}(J_{1}+J_{2})$ and $\bar{J}^{\prime}={\textstyle\frac{1}{8}}(J_{1}^{\prime}+...+J_{8}^{\prime})$, shown as a function of $q_{b}$. ## IV Theory: Frustration-Breaking ### IV.1 Analysis We now establish the physical framework for the nonlinear magnetophononic phenomenon we have created. Qualitatively, the lattice displacements due to any phonon excitation alter the instantaneous magnetic interactions [Fig. 5(a)]. To analyze this situation we express the Hamiltonian of the driven system as $H=H_{0}+H_{H}+V_{\rm Ph}+H_{\rm Ph-Em}+H_{D},$ (1) where $H_{0}$ is spin-independent and $H_{H}=J\sum_{\langle ij\rangle}{\vec{S}}_{i}\cdot{\vec{S}}_{j}+J^{\prime}\sum_{\langle\langle ij\rangle\rangle}{\vec{S}}_{i}\cdot{\vec{S}}_{j},$ (2) is the Shastry-Sutherland model at equilibrium: ${\vec{S}}_{i}$ is a spin-1/2 operator located on the Cu2+ ion at site $i$, $J$ and $J^{\prime}$ are the magnetic interactions depicted in Fig. 2(a), and $\langle ij\rangle$ and $\langle\langle ij\rangle\rangle$ denote respectively pairs of sites on intra- and interdimer bonds. 
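The role of ideal frustration in Eq. (2) can be illustrated with a minimal four-spin sketch (two orthogonal dimers, exact diagonalization with numpy; the bond values are purely illustrative). When the two $J^{\prime}$ bonds reaching a dimer are equal, the product of dimer singlets is an exact eigenstate; any imbalance $\Delta J^{\prime}$ couples the singlets to other states.

```python
import numpy as np

# Spin-1/2 operators (hbar = 1)
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2

def site_op(op, i, n=4):
    """Embed a single-site operator at site i of an n-site cluster."""
    out = np.array([[1.0 + 0j]])
    for k in range(n):
        out = np.kron(out, op if k == i else np.eye(2))
    return out

def heis(i, j):
    """Heisenberg bond S_i . S_j on the 4-site cluster."""
    return sum(site_op(o, i) @ site_op(o, j) for o in (sx, sy, sz))

def H(J, J1p, J2p):
    # Dimer A = sites (0,1), dimer B = sites (2,3); site 2 of the orthogonal
    # dimer couples to BOTH sites of dimer A (Shastry-Sutherland geometry),
    # with interdimer bond strengths J1p and J2p.
    return J * (heis(0, 1) + heis(2, 3)) + J1p * heis(0, 2) + J2p * heis(1, 2)

singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
psi = np.kron(singlet, singlet)   # product of the two dimer singlets
E = -1.5                          # two singlets at -(3/4)J each, with J = 1

# Ideal frustration (equal J' bonds): psi is an exact eigenstate,
# because (S_0 + S_1) annihilates the singlet on dimer A.
res_sym = np.linalg.norm(H(1.0, 0.6, 0.6) @ psi - E * psi)

# Broken frustration (Delta J' = 0.12): psi is no longer an eigenstate.
res_brk = np.linalg.norm(H(1.0, 0.66, 0.54) @ psi - E * psi)
print(res_sym, res_brk)
```

At equilibrium the two $J^{\prime}$ contributions cancel exactly, which is why triplon motion and pair creation are forbidden; the finite residual for $\Delta J^{\prime}\neq 0$ is precisely the coupling collected below in $H_{D}$.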
$V_{\rm Ph}$ is the phonon potential and $H_{\rm Ph-Em}=\sum_{m}\mu_{m}q_{m}E_{\rm THz}$, the dipole coupling between the lattice and the THz light, is the term driving the excitation of IR-active phonon modes. The action of the coherent lattice deformation due to all of these phonon modes, $q_{t}=\sum_{m}q_{m}$, in modulating the magnetic interaction parameters causes additional coupled spin-phonon terms to enter the Hamiltonian, which we collect in $H_{D}$. To separate the phonon modulation terms in $H_{D}$, it is convenient to perform a Taylor expansion of the magnetic interactions, $J$ and $J^{\prime}$, in powers of the simultaneously driven coherent IR phonon coordinates, which we denote as $q_{m}$, $q_{n},\dots$ Denoting an arbitrary interaction term as ${\tilde{J}}$, we obtain $\displaystyle{\tilde{J}}(q_{m},q_{n},\dots)$ $\displaystyle=$ $\displaystyle{\tilde{J}}(0)+\left.\frac{\partial{\tilde{J}}}{\partial q_{m}}\right|_{q_{m}=0}q_{m}$ $\displaystyle\;\;\;\;\;\;\;\;+\left.\frac{\partial^{2}{\tilde{J}}}{\partial q_{m}\partial q_{n}}\right|_{q_{m},q_{n}=0}q_{m}q_{n}+\dots$ where the first term is part of $H_{H}$ and the higher terms make clear the direct dependence of the additional contributions on the oscillating phonon coordinates. Both key features of our experiment become clear immediately in a minimal model with only two harmonic IR phonons, i.e. $q_{t}=q_{a}+q_{b}$, where $q_{a}$ and $q_{b}$ are vectors of normal-mode coordinates with cosinusoidal time structures at respective frequencies $\omega_{a}$ and $\omega_{b}$. The $q_{t}^{2}$ terms appearing in the second line of this expansion lead to spectral components at frequencies $2\omega_{a}$, $\omega_{a}+\omega_{b}$, $2\omega_{b}$, 0, and $\omega_{b}-\omega_{a}$.
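This frequency mixing can be verified with a short numerical sketch, using the experimental frequencies 3.80 and 4.60 THz and illustrative unit amplitudes for the two phonon coordinates:

```python
import numpy as np

f_a, f_b = 3.80, 4.60            # phonon frequencies in THz (from the text)
t = np.arange(0, 200, 0.01)      # time in ps; 0.01 ps sampling -> 50 THz Nyquist
q_t = np.cos(2 * np.pi * f_a * t) + np.cos(2 * np.pi * f_b * t)

# Spectrum of the quadratic term q_t^2
spec = np.abs(np.fft.rfft(q_t**2))
freqs = np.fft.rfftfreq(t.size, d=0.01)  # THz

# Weight at each expected mixing frequency (0.05 THz window around each)
expected = [0.0, f_b - f_a, 2 * f_a, f_a + f_b, 2 * f_b]
for f0 in expected:
    window = np.abs(freqs - f0) < 0.05
    print(f"{f0:5.2f} THz  ->  weight {spec[window].max():.1f}")
```

Only the five frequencies listed in the text survive, and in particular the difference frequency $\omega_{b}-\omega_{a}$ at 0.80 THz, the component resonant with the TBS.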
These quadratic terms provide the leading nonlinear mechanism that allows a very wide range of spin excitation energies to be addressed using the sum and difference frequencies of strongly driven IR phonons, whose frequency range is more restricted. The second key feature is that the unconventional physics of the Shastry-Sutherland model, and by extension of SrCu2(BO3)2 at equilibrium, relies on the ideal frustration of the spin correlations between dimers. The term ${\vec{S}}_{i}\cdot{\vec{S}}_{j}$ acting on a single dimer is an eigenoperator, and when acting between dimers it is exactly cancelled by a second ${\vec{S}}_{i}\cdot{\vec{S}}_{j}$ term with equal size ($J^{\prime}$) and effectively opposite sign [Fig. 2(a)]. However, the excited phonons driving the lattice out of equilibrium cause frustration-breaking in the magnetic sector, in the form of a finite interdimer coupling, $\Delta J_{12}^{\prime}=J_{1}^{\prime}(q_{m},q_{n},\dots)-J_{2}^{\prime}(q_{m},q_{n},\dots)$, that connects nearest-neighbor dimers [Fig. 5(a)]. This has the immediate effect of allowing two qualitatively new types of physical process that are forbidden at equilibrium. To make these most transparent we reexpress the nonequilibrium spin Hamiltonian in terms of triplet creation and annihilation operators [35], which represent the excitation of the dimer ($J$) units in Fig. 2(a) into their triplet states.
Treating the singlets as a scalar term gives the form $\displaystyle H_{D}$ $\displaystyle=$ $\displaystyle\sum_{i}[\Delta J_{12}^{\prime}(t_{1,i}^{\dagger}t_{2,i+x}+t_{1,i}^{\dagger}t_{2,i+x}^{\dagger})$ $\displaystyle\;\;\;\;\;\;+\Delta J_{34}^{\prime}(t_{2,i}^{\dagger}t_{1,i+x}+t_{2,i}^{\dagger}t_{1,i+x}^{\dagger})$ $\displaystyle\;\;\;\;\;\;+\Delta J_{56}^{\prime}(t_{1,i}^{\dagger}t_{2,i+y}+t_{1,i}^{\dagger}t_{2,i+y}^{\dagger})$ $\displaystyle\;\;\;\;\;\;+\Delta J_{78}^{\prime}(t_{2,i}^{\dagger}t_{1,i+y}+t_{2,i}^{\dagger}t_{1,i+y}^{\dagger})]+{\rm H.c.},$ where the full set of eight inequivalent $J^{\prime}$ bonds is shown in Fig. S5 of the SM [41]. The first term in each bracket describes the propagation of existing triplon excitations between dimers, relieving their strict localization, although the high-frequency oscillation of $\Delta J^{\prime}$ does not allow any quasi-static changes of the flat triplon bands. The second term in each bracket of $H_{D}$ describes two-triplon creation on neighboring dimer pairs, i.e. the direct excitation of the TBS from the singlet quantum ground state ($|s\rangle$). While the triplon pairs are created initially on adjacent dimers, on the timescale of the spin system they will optimize their relative configuration to form the most strongly bound state [35], which is depicted on the right side of Fig. 2(a). Because lattice excitations cannot change the spin quantum numbers ($\Delta S_{\rm tot}=0$) in a spin-isotropic Hamiltonian, only transitions to the singlet TBS ($S=0$) are allowed. The coefficients $\Delta J_{12}^{\prime},\dots,\Delta J_{78}^{\prime}$ in $H_{D}$ are a direct expression of the frustration-breaking, making fully explicit the origin of this phonon-driven triplon pair creation. ### IV.2 Lattice dynamics and density functional theory We perform two types of quantitative calculation within this framework. To model the nonlinear effects of the driven phonons, we follow Ref.
[50] by considering $H_{D}(q_{m},q_{n})$ as a dynamic perturbation through the coupling term $H_{\rm Ph-Em}$ in Eq. (1). In our experiments (Sec. III) two different IR-active phonon modes show the strongest driving. Labelling these by $a$ and $b$, their equations of motion are $\displaystyle{\ddot{q}}_{a}+\gamma_{a}{\dot{q}}_{a}+\omega_{a}^{2}q_{a}$ $\displaystyle=$ $\displaystyle-B_{a}E_{\rm THz}(t),$ $\displaystyle{\ddot{q}}_{b}+\gamma_{b}{\dot{q}}_{b}+\omega_{b}^{2}q_{b}$ $\displaystyle=$ $\displaystyle-B_{b}E_{\rm THz}(t),$ (5) where $\omega_{a}=3.80$ THz and $\omega_{b}=4.60$ THz are the experimental phonon frequencies, $\gamma_{a}=0.02$ THz and $\gamma_{b}=0.03$ THz are their respective damping rates (Sec. S2 of the SM [41]), and $E_{\rm THz}$ is the external THz driving field taken from experiment [Fig. 1(b)]. $B_{a}$ and $B_{b}$ are the dipolar coupling constants, which depend on the effective charges ($Z_{\rm eff}^{i}$), the transmission coefficients ($\beta_{m}$), and the reduced masses of the phonon modes, which we deduce from the maximum displacements $\delta_{a}=0.04$ Å and $\delta_{b}=0.17$ Å calculated following Sec. S1 of the SM [41]. From Eq. (5) we compute the net atomic displacement due to the two leading phonon modes, $q_{t}=q_{a}+q_{b}$, as a function of time, obtaining the result shown in Fig. 5(c). The second type of calculation is to compute the magnetic interaction parameters by DFT. These calculations were performed using the Quantum Espresso package [51], an open-source tool for electronic structure calculations based on DFT and the pseudopotential plane-wave technique. Exchange and correlation effects were modelled using the PBE functional [52], augmented by a Hubbard $U$ term to account for the strongly correlated nature of the Cu 3$d$ electrons.
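Before detailing the interaction calculations, we note that the driven-phonon dynamics of Eq. (5) can be reproduced with a minimal numerical sketch. Here the measured field $E_{\rm THz}(t)$ is replaced by an illustrative sub-picosecond single-cycle pulse, the couplings $B_{a}$ and $B_{b}$ are set to unity, and the quoted damping rates are assumed to be angular quantities, so only the time structure of $q_{t}$, not its absolute scale, is meaningful:

```python
import numpy as np

# Parameters quoted in the text (frequencies in THz, converted to rad/ps)
w_a, w_b = 2 * np.pi * 3.80, 2 * np.pi * 4.60
g_a, g_b = 2 * np.pi * 0.02, 2 * np.pi * 0.03   # damping (assumed angular units)

def E_thz(t):
    """Stand-in single-cycle pulse (derivative of a Gaussian),
    NOT the measured field of Fig. 1(b)."""
    t0, sigma = 1.5, 0.10                        # ps (illustrative)
    return -(t - t0) / sigma**2 * np.exp(-0.5 * ((t - t0) / sigma)**2)

def drive(w0, gamma, B, t, dt=1e-3):
    """Semi-implicit Euler for  q'' + gamma q' + w0^2 q = -B E(t)."""
    q, v = 0.0, 0.0
    out = np.empty(t.size)
    for i, ti in enumerate(t):
        v += dt * (-gamma * v - w0**2 * q - B * E_thz(ti))
        q += dt * v
        out[i] = q
    return out

t = np.arange(0.0, 20.0, 1e-3)                   # ps
q_a = drive(w_a, g_a, 1.0, t)
q_b = drive(w_b, g_b, 1.0, t)
q_tot = q_a + q_b          # net coherent displacement, cf. Fig. 5(c)
```

With the measured field and the calibrated couplings inserted in place of these stand-ins, the same integration yields the $q_{t}$ trace shown in Fig. 5(c).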
The calculation of magnetic interactions is a self-consistent process in which the lattice structure is relaxed fully in a selected collinear spin configuration for each fixed value of the effective $U$ parameter and then the total energies of the different magnetic configurations are compared [53] by mapping them onto the terms $H_{0}+H_{H}$ in Eq. (1) to determine equilibrium values for $J$ and $J^{\prime}$ [Eq. (2)]. The full details of this process and of its intrinsic accuracy are presented in Sec. S5 of the SM [41]. In the initial step of our calculations we used this procedure to refine $U$, deducing that $U=11.4$ eV yields the magnetic interactions $J=7.24$ meV (84.0 K) and $J^{\prime}=4.28$ meV (49.7 K), in good agreement with experimental findings [35] and yielding a coupling ratio $\alpha=0.592$ for the equilibrium lattice structure. We then extended these methods to estimate the phonon-induced modulation of the magnetic interaction parameters by computing the temporal evolution of the frustration-breaking terms, $\Delta J^{\prime}(t)$. For this we evaluated the magnetic interactions in a dense sequence of different “frozen phonon” configurations of the lattice. Each atom in the SrCu2(BO3)2 structure was displaced by $q_{m}{\hat{u}}_{im}$, where $q_{m}$ is the instantaneous displacement amplitude of excited phonon mode $m$, ${\hat{u}}_{im}$ denotes the set of normal-mode vectors taken from Ref. [42], and we restricted our calculations to $m=a,b$ with $\omega_{a}=3.80$ and $\omega_{b}=4.60$ THz. The displacements of the atoms from equilibrium reduce the lattice symmetry, as represented in Fig. 5(a), and hence require more complex calculations of more interaction parameters in a larger unit cell, as detailed in Sec. S5 of the SM [41]. In Fig.
5(b) we show the four different interdimer interaction parameters $(J_{1}^{\prime},J_{2}^{\prime},J_{3}^{\prime},J_{4}^{\prime})$ neighboring each vertical spin dimer as functions of the largest scalar phonon displacement amplitude, $q_{b}$, whose maximum value is $\delta_{b}$. Figure 5(b) shows that, while the phonon-induced variations in $J$ are almost quadratic, the interactions $J^{\prime}_{1,...,4}$ all have significant linear components. These result in large values of the frustration-breaking difference interactions, $\Delta J^{\prime}$, well in excess of 10 K (i.e. reaching 20-30% of $J^{\prime}$ by this estimate). Extending these results into the time domain, in Fig. 5(c) we show $q_{t}$ and in Fig. 5(d) the resulting evolution of $\Delta J_{12}^{\prime}$. In the frequency domain, the IR phonons excited by the THz pump [Fig. 5(e)] create a nonlinear modulation of the interaction parameters with a rich spectrum [Fig. 5(f)]. This spectrum includes significant weight at the frequency $\omega_{b}-\omega_{a}$, whose resonance with the TBS [inset, Fig. 5(e)] ensures large values of the matrix element for two-triplon creation, $\langle{\rm TBS}|\Delta J^{\prime}(\omega)t_{1,i}^{\dagger}t_{2,j}^{\dagger}|s\rangle$, and hence the strong nonequilibrium population of this purely magnetic excitation that we report in Figs. 3(a) and 3(c). ## V Discussion Ideal frustration of magnetic interactions is the core attribute that leads quantum spin systems to form entirely unconventional and exotic phases, although some of the most fundamentally novel properties (such as fractional excitations, topological order, and entanglement) often remain largely hidden to conventional experimental probes. 
In this context, a static breaking of ideal frustration is often regarded as trivial, merely removing the properties that make the system special; in SrCu2(BO3)2, a static $\Delta J^{\prime}$ would restore triplon propagation, destabilize the bound states, and favor the nearby phase of long-ranged antiferromagnetic order [21]. However, the dynamical frustration-breaking we effect means that the static, equilibrium state of ideal frustration is retained, excluding most of these more trivial hallmarks and thereby offering an alternative route to the selective and controlled investigation of certain hidden properties, as we discuss next. While much has been made of “controlling quantum systems” on ultrafast timescales, we stress that ultrafast processes in magnetic materials have to date been restricted in large part to destroying, modulating the magnitude, or switching the direction of an ordered moment [8, 10, 14]. Here we have solved two further fundamental problems on the route to ultrafast dynamical control, namely the coupling of light to a quantum magnet with no magnetic order and the frequency-matching problem by which the light-driven pump (the phonons) can be tuned to the target (magnetic) excitations. As a result we have achieved control in the form of creating a highly nonequilibrium population of a target excited state. However, in the limit exemplified by SrCu2(BO3)2, where the phonons are much faster than the spin excitations, we have not controlled the energy of the TBS; by contrast, slow phonons in a material with high magnetic energies would be one simple situation in which to effect this type of control. As a more general route to nonresonant interaction control, Fig. 5(g) indicates how the method of driving multiple IR-active phonons in the harmonic regime causes a strong increase in the instantaneous ratio of the spatially averaged interactions, $\alpha=\bar{J}^{\prime}/\bar{J}$.
While recent experimental [38] and theoretical [39] studies have highlighted the role of the “pantograph” phonon [37] in modulating $\alpha$, this $A_{1}$-symmetric mode, found at 6.1 THz in Raman measurements, cannot be excited coherently in the same way as IR phonons. Because the phonons are so much faster than the magnetic interactions, the important quantity is the time-average of $\alpha(q_{b})$ in Fig. 5(g), whose rise with $q_{b}$ indicates a driven, dynamical approach to the static QPT into the plaquette phase of SrCu2(BO3)2, i.e. with no frustration-breaking effects on long timescales. Although it represents the effects of only a single phonon, Fig. 5(g) suggests that the dynamical approach to controlling $\alpha$ is clearly comparable in range with hydrostatic pressure techniques [27, 22]; indeed our current studies, which were not optimized for this purpose, indicate a high potential for achieving significantly stronger static effects by further increasing the THz field strength and by tailoring of the excited phonon modes. Beyond the primary superexchange interactions ($J$ and $J^{\prime}$), further terms in the modulated spin Hamiltonian can also produce spin excitations that are normally weak or forbidden. SrCu2(BO3)2 has Dzyaloshinskii-Moriya (DM) interactions [31], which are small (3% of $J$), but are important in applied magnetic fields, including to create topological states [35]. The driving of symmetry-breaking IR phonons also creates dynamical antisymmetric spin interactions directly analogous to the modulation of $J$ and $J^{\prime}$. In SrCu2(BO3)2, our experiments demonstrate no discernible role for driven DM interactions, because the one-triplon excitation process ($\Delta S_{\rm tot}=1$) at 0.71 THz should be excited by the same difference-frequency envelope as the TBS, but clearly no feature is visible above the detection threshold at this frequency in Fig. 3(c). 
More generally, however, one may use selective nonlinear IR phonon driving to manipulate both the symmetric and antisymmetric magnetic interactions in systems such as chiral spin liquids and skyrmion lattices, where both couplings play an essential role. In summary, we have demonstrated coherent light-driven spin dynamics in a purely quantum magnetic system. By the resonant excitation of phonons and their nonlinear mixing to span a very wide (sum and difference) frequency range, our experimental protocol meets the intrinsic challenge of spin-phonon frequency-matching. We have applied it to SrCu2(BO3)2 and achieved the selective excitation of the singlet branch of the two-triplon bound state without exciting individual triplons. We have shown theoretically how this process occurs, once the driven phonons relieve the ideal magnetic frustration, and have performed DFT calculations to estimate the magnitude of the interaction modulation. Our results open an additional time dimension for exploring quantum magnetic phenomena that to date have been probed only by quasi-static stimuli, and because it uses the lattice as its medium our method is applicable without restriction to all the exotic spin states available in quantum magnetic materials. In view of ongoing technical progress at all the frontiers of narrow-band spectra, high intensities, and ultrashort pulses, one may anticipate order-of-magnitude improvements in both driving and detection that will place dynamically driven phenomena within reach in multiple classes of quantum material. ###### Acknowledgements. We are grateful to C. Homes for sharing the calculated phonon eigenvectors. We thank T. Cea, M. Först, S. Furuya, A. Kimel, R. Mankowsky, F. Mila, A. Razpopov, G. S. Uhrig, and R. Valentí for valuable discussions. This research was supported by the European Research Council (ERC) within the EU Horizon 2020 research and innovation programme under Grant No. 
681654 (HyperQC), by the MARVEL National Centre of Competence in Research of the Swiss National Science Foundation and by the DFG (German Research Foundation) through Grants No. UH90/13-1 and UH90/14-1. ## References * Salén _et al._ [2019] P. Salén, M. Basini, S. Bonetti, J. Hebling, M. Krasilnikov, A. Y. Nikitin, G. Shamuilov, Z. Tibai, V. Zhaunerchyk, and V. Goryashko, Matter manipulation with extreme terahertz light: Progress in the enabling THz technology, Phys. Rep. 836-837, 1 (2019). * Nicoletti and Cavalleri [2016] D. Nicoletti and A. Cavalleri, Nonlinear light–matter interaction at terahertz frequencies, Adv. Opt. Photon. 8, 401 (2016). * Mitrano _et al._ [2016] M. Mitrano, A. Cantaluppi, D. Nicoletti, S. Kaiser, A. Perucchi, S. Lupi, P. Di Pietro, D. Pontiroli, M. Riccò, S. R. Clark, D. Jaksch, and A. Cavalleri, Possible light-induced superconductivity in K3C60 at high temperature, Nature 530, 461 (2016). * Caviglia _et al._ [2012] A. D. Caviglia, R. Scherwitzl, P. Popovich, W. Hu, H. Bromberger, R. Singla, M. Mitrano, M. C. Hoffmann, S. Kaiser, P. Zubko, S. Gariglio, J.-M. Triscone, M. Först, and A. Cavalleri, Ultrafast Strain Engineering in Complex Oxide Heterostructures, Phys. Rev. Lett. 108, 136801 (2012). * Kubacka _et al._ [2014] T. Kubacka, J. A. Johnson, M. C. Hoffmann, C. Vicario, S. de Jong, P. Beaud, S. Grübel, S.-W. Huang, L. Huber, L. Patthey, Y.-D. Chuang, J. J. Turner, G. L. Dakovski, W.-S. Lee, M. P. Minitti, W. Schlotter, R. G. Moore, C. P. Hauri, S. M. Koohpayeh, V. Scagnoli, G. Ingold, S. L. Johnson, and U. Staub, Large-amplitude spin dynamics driven by a THz pulse in resonance with an electromagnon, Science 343, 1333 (2014). * Oka and Kitamura [2019] T. Oka and S. Kitamura, Floquet Engineering of Quantum Materials, Annu. Rev. Condens. Matter Phys. 10, 387 (2019). * Vicario _et al._ [2013] C. Vicario, C. Ruchert, F. Ardana-Lamas, P. M. Derlet, B. Tudu, J. Luning, and C. P. 
Hauri, Off-resonant magnetization dynamics phase-locked to an intense phase-stable terahertz transient, Nat. Photonics 7, 720 (2013). * Kampfrath _et al._ [2011] T. Kampfrath, A. Sell, G. Klatt, A. Pashkin, S. Mährlein, T. Dekorsy, M. Wolf, M. Fiebig, A. Leitenstorfer, and R. Huber, Coherent terahertz control of antiferromagnetic spin waves, Nat. Photonics 5, 31 (2011). * Mikhaylovskiy _et al._ [2015] R. Mikhaylovskiy, E. Hendry, A. Secchi, J. Mentink, M. Eckstein, A. Wu, R. Pisarev, V. Kruglyak, M. Katsnelson, T. Rasing, and A. Kimel, Ultrafast optical modification of exchange interactions in iron oxides, Nat. Commun. 6, 8190 (2015). * Först _et al._ [2015] M. Först, A. D. Caviglia, R. Scherwitzl, R. Mankowsky, P. Zubko, V. Khanna, H. Bromberger, S. B. Wilkins, Y.-D. Chuang, W. S. Lee, W. F. Schlotter, J. J. Turner, G. L. Dakovski, M. P. Minitti, J. Robinson, S. R. Clark, D. Jaksch, J.-M. Triscone, J. P. Hill, S. S. Dhesi, and A. Cavalleri, Spatially resolved ultrafast magnetic dynamics initiated at a complex oxide heterointerface, Nat. Mater. 14, 883 (2015). * Nova _et al._ [2017] T. F. Nova, A. Cartella, A. Cantaluppi, M. Först, D. Bossini, R. V. Mikhaylovskiy, A. V. Kimel, R. Merlin, and A. Cavalleri, An effective magnetic field from optically driven phonons, Nat. Phys. 13, 132 (2017). * Fechner _et al._ [2018] M. Fechner, A. Sukhov, L. Chotorlishvili, C. Kenel, J. Berakdar, and N. A. Spaldin, Magnetophononics: Ultrafast spin control through the lattice, Phys. Rev. Mater. 2, 064401 (2018). * Afanasiev _et al._ [2021] D. Afanasiev, J. R. Hortensius, B. A. Ivanov, A. Sasani, E. Bousquet, Y. M. Blanter, R. V. Mikhaylovskiy, A. V. Kimel, and A. D. Caviglia, Ultrafast control of magnetic interactions via light-driven phonons, Nat. Mater. 20, 607 (2021). * Disa _et al._ [2020] A. S. Disa, M. Fechner, T. F. Nova, B. Liu, M. Först, D. Prabhakaran, P. G. Radaelli, and A. Cavalleri, Polarizing an antiferromagnet by optical engineering of the crystal field, Nat. Phys. 
16, 937 (2020). * Sachdev [2008] S. Sachdev, Quantum magnetism and criticality, Nat. Phys. 4, 173 (2008). * Savary and Balents [2016] L. Savary and L. Balents, Quantum spin liquids: a review, Rep. Prog. Phys. 80, 016502 (2016). * Broholm _et al._ [2020] C. Broholm, R. J. Cava, S. A. Kivelson, D. G. Nocera, M. R. Norman, and T. Senthil, Quantum spin liquids, Science 367 (2020). * Miyahara and Ueda [2003] S. Miyahara and K. Ueda, Theory of the orthogonal dimer Heisenberg spin model for ${\mathrm{SrCu}}_{2}({\mathrm{BO}}_{3}{)}_{2}$, J. Phys. Condens. Matter 15, R327 (2003). * Anderson [1973] P. W. Anderson, Resonating valence bonds: A new kind of insulator?, Mat. Res. Bull. 8, 153 (1973). * Sriram Shastry and Sutherland [1981] B. Sriram Shastry and B. Sutherland, Exact ground state of a quantum mechanical antiferromagnet, Physica B+C 108, 1069 (1981). * Corboz and Mila [2013] P. Corboz and F. Mila, Tensor network study of the Shastry-Sutherland model in zero magnetic field, Phys. Rev. B 87, 115144 (2013). * Zayed _et al._ [2017] M. E. Zayed, C. Rüegg, J. Larrea J., A. M. Läuchli, C. Panagopoulos, S. S. Saxena, M. Ellerby, D. F. McMorrow, T. Strässle, S. Klotz, G. Hamel, R. A. Sadykov, V. Pomjakushin, M. Boehm, M. Jiménez-Ruiz, A. Schneidewind, E. Pomjakushina, M. Stingaciu, K. Conder, and H. M. Rønnow, 4-spin plaquette singlet state in the Shastry-Sutherland compound ${\mathrm{SrCu}}_{2}({\mathrm{BO}}_{3}{)}_{2}$, Nat. Phys. 13, 962 (2017). * Guo _et al._ [2020] J. Guo, G. Sun, B. Zhao, L. Wang, W. Hong, V. A. Sidorov, N. Ma, Q. Wu, S. Li, Z. Y. Meng, A. W. Sandvik, and L. Sun, Quantum phases of SrCu2(BO3)2 from high-pressure thermodynamics, Phys. Rev. Lett. 124, 206602 (2020). * Larrea Jiménez _et al._ [2021] J. Larrea Jiménez, S. P. G. Crone, E. Fogh, M. E. Zayed, R. Lortz, E. Pomjakushina, K. Conder, A. M. Läuchli, L. Weber, S. Wessel, A. Honecker, B. Normand, C. Rüegg, P. Corboz, H. M. Rønnow, and F. 
Mila, A quantum magnetic analogue to the critical point of water, Nature 592, 370 (2021). * Kageyama _et al._ [1999] H. Kageyama, K. Yoshimura, R. Stern, N. V. Mushnikov, K. Onizuka, M. Kato, K. Kosuge, C. P. Slichter, T. Goto, and Y. Ueda, Exact dimer ground state and quantized magnetization plateaus in the two-dimensional spin system SrCu2(BO3)2, Phys. Rev. Lett. 82, 3168 (1999). * Takigawa _et al._ [2013] M. Takigawa, M. Horvatić, T. Waki, S. Krämer, C. Berthier, F. Lévy-Bertrand, I. Sheikin, H. Kageyama, Y. Ueda, and F. Mila, Incomplete Devil’s Staircase in the Magnetization Curve of ${\mathrm{SrCu}}_{2}({\mathrm{BO}}_{3}{)}_{2}$, Phys. Rev. Lett. 110, 067210 (2013). * Haravifard _et al._ [2016] S. Haravifard, D. Graf, A. E. Feiguin, C. D. Batista, J. C. Lang, D. M. Silevitch, G. Srajer, B. D. Gaulin, H. A. Dabkowska, and T. F. Rosenbaum, Crystallization of spin superlattices with pressure and field in the layered magnet SrCu2(BO3)2, Nat. Commun. 7, 11956 (2016). * Rüegg _et al._ [2008] C. Rüegg, B. Normand, M. Matsumoto, A. Furrer, D. F. McMorrow, K. W. Krämer, H. U. Güdel, S. N. Gvasaliya, H. Mutka, and M. Boehm, Quantum Magnets under Pressure: Controlling Elementary Excitations in TlCuCl3, Phys. Rev. Lett. 100, 205701 (2008). * Merchant _et al._ [2014] P. Merchant, B. Normand, K. W. Krämer, M. Boehm, D. F. McMorrow, and C. Rüegg, Quantum and classical criticality in a dimerized quantum antiferromagnet, Nat. Phys. 10, 373 (2014). * Uhlarz _et al._ [2004] M. Uhlarz, C. Pfleiderer, and S. M. Hayden, Quantum Phase Transitions in the Itinerant Ferromagnet ZrZn2, Phys. Rev. Lett. 93, 256404 (2004). * Nojiri _et al._ [2003] H. Nojiri, H. Kageyama, Y. Ueda, and M. Motokawa, ESR Study on the Excited State Energy Spectrum of ${\mathrm{SrCu}}_{2}({\mathrm{BO}}_{3}{)}_{2}$ \- a central role of multiple-triplet bound states -, J. Phys. Soc. Jpn. 72, 3243 (2003). * Gaulin _et al._ [2004] B. D. Gaulin, S. H. Lee, S. Haravifard, J. P. Castellan, A. J. Berlinsky, H. A. 
Dabkowska, Y. Qiu, and J. R. D. Copley, High-Resolution Study of Spin Excitations in the Singlet Ground State of SrCu2(BO3)2, Phys. Rev. Lett. 93, 267202 (2004). * Lemmens _et al._ [2000] P. Lemmens, M. Grove, M. Fischer, G. Güntherodt, V. N. Kotov, H. Kageyama, K. Onizuka, and Y. Ueda, Collective Singlet Excitations and Evolution of Raman Spectral Weights in the 2D Spin Dimer Compound ${\mathrm{SrCu}}_{2}({\mathrm{BO}}_{3}{)}_{2}$, Phys. Rev. Lett. 85, 2605 (2000). * Gozar _et al._ [2005] A. Gozar, B. S. Dennis, H. Kageyama, and G. Blumberg, Symmetry and light coupling to phononic and collective magnetic excitations in ${\mathrm{SrCu}}_{2}({\mathrm{BO}}_{3}{)}_{2}$, Phys. Rev. B 72, 064405 (2005). * McClarty _et al._ [2017] P. A. McClarty, F. Krüger, T. Guidi, S. F. Parker, K. Refson, A. W. Parker, D. Prabhakaran, and R. Coldea, Topological triplon modes and bound states in a Shastry-Sutherland magnet, Nat. Phys. 13, 736 (2017). * Rõõm _et al._ [2000] T. Rõõm, U. Nagel, E. Lippmaa, H. Kageyama, K. Onizuka, and Y. Ueda, Far-infrared study of the two-dimensional dimer spin system ${\mathrm{SrCu}}_{2}({\mathrm{BO}}_{3}{)}_{2}$, Phys. Rev. B 61, 14342 (2000). * Radtke _et al._ [2015] G. Radtke, A. Saúl, H. A. Dabkowska, M. B. Salamon, and M. Jaime, Magnetic nanopantograph in the ${\mathrm{SrCu}}_{2}({\mathrm{BO}}_{3}{)}_{2}$ Shastry–Sutherland lattice, Proc. Natl. Acad. Sci. 112, 1971 (2015). * Bettler _et al._ [2020] S. Bettler, L. Stoppel, Z. Yan, S. Gvasaliya, and A. Zheludev, Sign switching of dimer correlations in ${\mathrm{SrCu}}_{2}({\mathrm{BO}}_{3}{)}_{2}$ under hydrostatic pressure, Phys. Rev. Res. 2, 012010 (2020). * Badrtdinov _et al._ [2020] D. I. Badrtdinov, A. A. Tsirlin, V. V. Mazurenko, and F. Mila, ${\mathrm{SrCu}}_{2}({\mathrm{BO}}_{3}{)}_{2}$ under pressure: A first-principles study, Phys. Rev. B 101, 224424 (2020). * Vicario _et al._ [2020] C. Vicario, A. Trisorio, S. Allenspach, C. Rüegg, and F. 
Giorgianni, Narrow-band and tunable intense terahertz pulses for mode-selective coherent phonon excitation, Appl. Phys. Lett. 117, 101101 (2020).
* [41] Details are provided in the Supplemental Material, which includes Refs. [54, 55, 56, 57, 58, 59].
* Homes _et al._ [2009] C. C. Homes, S. V. Dordevic, A. Gozar, G. Blumberg, T. Rõõm, D. Hüvonen, U. Nagel, A. D. LaForge, D. N. Basov, and H. Kageyama, Infrared spectra of the low-dimensional quantum magnet ${\mathrm{SrCu}}_{2}({\mathrm{BO}}_{3}{)}_{2}$: Measurements and ab initio calculations, Phys. Rev. B 79, 125101 (2009).
* Liu _et al._ [2017] B. Liu, H. Bromberger, A. Cartella, T. Gebert, M. Först, and A. Cavalleri, Generation of narrowband, high-intensity, carrier-envelope phase-stable pulses tunable between 4 and 18 THz, Opt. Lett. 42, 129 (2017).
* Agranat _et al._ [2018] M. B. Agranat, O. V. Chefonov, A. V. Ovchinnikov, S. I. Ashitkov, V. E. Fortov, and P. S. Kondratenko, Damage in a Thin Metal Film by High-Power Terahertz Radiation, Phys. Rev. Lett. 120, 085704 (2018).
* Zayed _et al._ [2014] M. E. Zayed, C. Rüegg, T. Strässle, U. Stuhr, B. Roessli, M. Ay, J. Mesot, P. Link, E. Pomjakushina, M. Stingaciu, K. Conder, and H. M. Rønnow, Correlated Decay of Triplet Excitations in the Shastry-Sutherland Compound ${\mathrm{SrCu}}_{2}({\mathrm{BO}}_{3}{)}_{2}$, Phys. Rev. Lett. 113, 067201 (2014).
* Schlauderer _et al._ [2019] S. Schlauderer, C. Lange, S. Baierl, T. Ebnet, C. P. Schmid, D. C. Valovcin, A. K. Zvezdin, A. V. Kimel, R. V. Mikhaylovskiy, and R. Huber, Temporal and spectral fingerprints of ultrafast all-coherent spin switching, Nature 569, 383 (2019).
* Yamakawa _et al._ [2017] H. Yamakawa, T. Miyamoto, T. Morimoto, T. Terashige, H. Yada, N. Kida, M. Suda, H. M. Yamamoto, R. Kato, K. Miyagawa, K. Kanoda, and H. Okamoto, Mott transition by an impulsive dielectric breakdown, Nat. Mater. 16, 1100 (2017).
* Tarekegne _et al._ [2017] A. T. Tarekegne, H. Hirori, K. Tanaka, K. Iwaszczuk, and P. U. Jepsen, Impact ionization dynamics in silicon by MV/cm THz fields, New J. Phys. 19, 123018 (2017).
* Kozina _et al._ [2019] M. Kozina, M. Fechner, P. Marsik, T. van Driel, J. M. Glownia, C. Bernhard, M. Radovic, D. Zhu, S. Bonetti, U. Staub, and M. C. Hoffmann, Terahertz-driven phonon upconversion in SrTiO3, Nat. Phys. 15, 387 (2019).
* Melnikov _et al._ [2018] A. A. Melnikov, K. N. Boldyrev, Y. G. Selivanov, V. P. Martovitskii, S. V. Chekalin, and E. A. Ryabov, Coherent phonons in a ${\mathrm{Bi}}_{2}{\mathrm{Se}}_{3}$ film generated by an intense single-cycle THz pulse, Phys. Rev. B 97, 214304 (2018).
* Giannozzi _et al._ [2017] P. Giannozzi, O. Andreussi, T. Brumme, O. Bunau, M. B. Nardelli, M. Calandra, R. Car, C. Cavazzoni, D. Ceresoli, M. Cococcioni, N. Colonna, I. Carnimeo, A. D. Corso, S. de Gironcoli, P. Delugas, R. A. DiStasio, A. Ferretti, A. Floris, G. Fratesi, G. Fugallo, R. Gebauer, U. Gerstmann, F. Giustino, T. Gorni, J. Jia, M. Kawamura, H.-Y. Ko, A. Kokalj, E. Küçükbenli, M. Lazzeri, M. Marsili, N. Marzari, F. Mauri, N. L. Nguyen, H.-V. Nguyen, A. Otero de la Roza, L. Paulatto, S. Poncé, D. Rocca, R. Sabatini, B. Santra, M. Schlipf, A. P. Seitsonen, A. Smogunov, I. Timrov, T. Thonhauser, P. Umari, N. Vast, X. Wu, and S. Baroni, Advanced capabilities for materials modelling with Quantum ESPRESSO, J. Phys. Condens. Matter 29, 465901 (2017).
* Perdew _et al._ [1996] J. P. Perdew, K. Burke, and M. Ernzerhof, Generalized Gradient Approximation Made Simple, Phys. Rev. Lett. 77, 3865 (1996).
* Radtke _et al._ [2008] G. Radtke, A. Saúl, H. A. Dabkowska, B. D. Gaulin, and G. A. Botton, Electronic structure of the quasi-two-dimensional spin-gap system $\mathrm{Sr}{\mathrm{Cu}}_{2}{(\mathrm{B}{\mathrm{O}}_{3})}_{2}$: Experiment and theory, Phys. Rev. B 77, 125130 (2008).
* Giorgianni _et al._ [2019] F. Giorgianni, J. Sakai, and S. Lupi, Overcoming the thermal regime for the electric-field driven Mott transition in vanadium sesquioxide, Nat. Commun. 10, 1159 (2019).
* Roskos _et al._ [2007] H. G. Roskos, M. D. Thomson, M. Kreß, and T. Löffler, Broadband THz emission from gas plasmas induced by femtosecond optical pulses: From fundamentals to applications, Laser & Photonics Rev. 1, 349 (2007).
* Juraschek and Maehrlein [2018] D. M. Juraschek and S. F. Maehrlein, Sum-frequency ionic Raman scattering, Phys. Rev. B 97, 174302 (2018).
* Prandini _et al._ [2018] G. Prandini, A. Marrazzo, I. E. Castelli, N. Mounet, and N. Marzari, Precision and efficiency in solid-state pseudopotential calculations, npj Comput. Mater. 4, 72 (2018).
* Prandini _et al._ [2020] G. Prandini, A. Marrazzo, I. E. Castelli, N. Mounet, and N. Marzari, A Standard Solid State Pseudopotentials (SSSP) library optimized for precision and efficiency, Mater. Cloud Arch., 2018.0001/v4 (2020).
* Vecchini _et al._ [2009] C. Vecchini, O. Adamopoulos, L. C. Chapon, A. Lappas, H. Kageyama, Y. Ueda, and A. Zorko, Structural distortions in the spin-gap regime of the quantum antiferromagnet SrCu2(BO3)2, J. Solid State Chem. 182, 3275 (2009).

Supplemental Material for “Ultrafast Frustration-Breaking and Magnetophononic Driving of Singlet Excitations in a Quantum Magnet”

F. Giorgianni, B. Wehinger, S. Allenspach, N. Colonna, C. Vicario, P. Puphal, E. Pomjakushina, B. Normand, and Ch. Rüegg

## S1 Experiment

Single crystals of SrCu2(BO3)2 were grown using an optical floating-zone furnace (FZ-T-10000-H-IV-VP-PC, Crystal System Corp., Japan) with four 300 W halogen lamps as the heat source. The growth rate was 0.25 mm/h, with both feeding and seeding rods being rotated at approximately 15 rpm in opposite directions to ensure the homogeneity of the liquid; an argon atmosphere with 20% oxygen was maintained at 5 bar during growth. The high structural quality and orientation of the resulting single crystal were confirmed by x-ray diffraction and the high magnetic quality (absence of impurities) by susceptibility measurements.
The crystal was cut using a diamond wire saw and cleaved along the $a^{\prime}b^{\prime}$-plane to give a sample of dimensions 1$\times$3$\times$0.24 mm$^{3}$ that was used for the experiments.

Figure S1: Experimental THz-pump, optical-probe set-up and measured THz pump parameters. (a) Schematic representation showing the Optical Parametric Amplifier (OPA), parabolic mirrors (PM1-PM3), delay line (DL), quarter-wave plate (QWP), Wollaston prism (WP), and detectors (D1-D2). (b) THz beam profile at the sample position measured by the THz camera. Beam waists obtained by Gaussian fitting are respectively $w_{x}=88$ $\mu$m and $w_{y}=96$ $\mu$m (average waist $w=92$ $\mu$m). (c) Temporal THz intensity waveform obtained as the square of the electric field measured by electro-optic sampling, using a Gaussian-envelope fit to determine the pulse duration.

The THz-pump and optical-probe set-up is shown in Fig. S1(a) [40]. The output of a 20 mJ, 55 fs, 800 nm Ti:sapphire laser was used to drive an optical parametric amplifier (OPA), which provides ultrashort, multi-mJ pulses [54]. Single-cycle THz pulses were generated by optical rectification of the OPA signal at 1.5 $\mu$m, using a crystal of DAST (4-N,N-dimethylamino-4’-N’-methyl-stilbazolium tosylate, from Rainbow Photonics). The OPA pulse energy of 3.2 mJ gave an NIR pump fluence at the crystal surface of approximately 5 mJ cm$^{-2}$. Three low-pass filters, two with a 20 THz cut-off frequency and one with 10 THz, were used after the THz generation step to block the residual OPA beam, and gave an extinction ratio for the pump in excess of $10^{5}$. For the pump pulses, an additional high-pass filter with a cut-off of 4.2 THz was used to drive the primary IR-active phonons while ensuring a negligible spectral weight at the frequency of the leading magnetic modes [Fig. 3(a) of the main text]. To select the four different pump pulses shown in Fig.
4(c) of the main text, we used respectively a 2 THz low-pass filter, a 3 THz band-pass filter, a 4.2 THz high-pass filter coupled with a 6 THz low-pass filter, and a 6 THz band-pass filter. For the measurement of properties dependent on the pump strength, the THz electric field was tuned by three wire-grid polarizers. Peak electric fields were reached by tight focusing of the THz beam using three parabolic mirrors [54]. To estimate the electric-field strength of the pump pulses shown in Fig. 1 of the main text, we apply the formula [55]

$E_{\rm THz}=\sqrt{\frac{z_{0}E_{p}4\sqrt{\ln 2}}{\pi\sqrt{\pi}w^{2}\tau_{\rm FWHM}}},$ (S1)

where $z_{0}$ is the vacuum impedance and we measured (i) the THz energy per pulse, $E_{p}=0.8$ $\mu$J, using a calibrated THz energymeter (Gentec THZ12D3S-VP-D0); (ii) the beam waist, $w=92$ $\mu$m, obtained by a Gaussian fit of the beam profile [Fig. S1(b)], which was measured with a micro-bolometric THz camera (NEC IRV-T0830); (iii) the pulse duration, $\tau_{\rm FWHM}=0.21$ ps, obtained from the FWHM of the Gaussian envelope fitting the temporal intensity waveform of the THz pump [Fig. S1(c)], which was taken as the square of the electric field measured at the sample position by electro-optic sampling in a 200 $\mu$m-thick (110) GaP crystal with a 50 fs, 800 nm gating pulse obtained as a fraction of the Ti:sapphire beam. Our estimated electric-field strength, $E_{\rm THz}=3.2$ MV cm$^{-1}$, is similar to other values in the recent literature [43, 44].

Figure S2: Time-frequency THz-driven dynamics. (a) THz pump electric-field amplitude. (b) Pump-induced polarization modulation in SrCu2(BO3)2 at 4 K. In panel (a) the color contours around $t=0$ are based on the FWHM isoline of the amplitude. In panel (b) the blue circles indicate the frequencies of the $E$-symmetric IR-active phonon modes that are pumped directly, where our measured values of 3.80, 4.27, 6.75, and 7.00 THz match within the experimental error with those of Ref. [42], whereas the 4.60 THz mode we measure is tabulated there as 4.75 THz. Green circles show the frequencies of $B_{1}$-symmetric modes at 8.57 THz and 11.7 THz [42] that are not IR-active but are excited by sum-frequency processes, which are expected to involve respectively the $E$-symmetric modes indicated with diamonds (3.80 THz and 4.60 THz) and with asterisks (4.60 THz and 7.00 THz). The pink diamond marks the frequency of the TBS, $\omega_{\rm TBS}=0.87$ THz (taken from Refs. [33, 34]); this mode is not electromagnetically active and is driven by the nonlinear (difference-frequency) spin-phonon coupling discussed in the main text. (c-f) Temporal dynamics, obtained by numerical band-pass filters, of the normalized polarization rotations measured for some of the primary excited modes.

The gating pulse was used to probe the ultrafast pump-induced polarization dynamics of the sample, which were measured by splitting the probe beam into two orthogonal components with a Wollaston prism. The THz electric field of the pump was polarized in the sample plane along the direction perpendicular to the optical table. The polarization of the probe relative to the sample was offset by 45$^{\circ}$ from the pump polarization. All measurements were performed in a He cryostat, which allowed a minimum background sample temperature of 3.5 K to be reached. For a quantitative analysis of the effect of $E_{\rm THz}$ on the sample, the peak polarization, $P_{m}$, induced by the THz pulse resonant with a generic phonon mode, $m$, is [3]

$P_{m}=\frac{\sigma_{1}(\omega_{m})}{\omega_{m}}\tilde{E}_{\rm THz},$ (S2)

where $\omega_{m}$ is the angular frequency and $\sigma_{1}(\omega_{m})$ the optical conductivity of the driven phonon.
The polarization arises from the modulation of the dipole moment of the crystal, $P_{m}=n_{d}\delta_{m}\mu_{m}$, where $\mu_{m}=e|\sum_{im}\hat{u}_{im}Z_{\rm eff}^{i}|$ is the magnitude of the charge displacement due to phonon mode $m$, $\hat{u}_{im}$ is the normalized vector of displacements of each atom, $i$, in phonon mode $m$, $Z_{\rm eff}^{i}$ are the Born effective charges of each atom, $n_{d}$ is the number of dipoles per unit volume, and $\delta_{m}$ is the maximum displacement coordinate of the phonon. The driving field, ${\tilde{E}}_{\rm THz}=\beta_{m}E_{\rm THz}$ in Eq. (S2), is the effective electric field inside the sample acting on the phonon [55], where $\beta_{m}=1-R_{m}$ is determined by the reflectivity at frequency $\omega_{m}$. To illustrate the estimation of the $\delta_{m}$ values induced by the THz electric field, we take the example of the 4.60 THz phonon mode, which is the strongest single feature of the driven response (Fig. 3 of the main text) and is labelled $m=b$ in Sec. IV. Thus we use $\omega_{b}=2\pi\times 4.6\times 10^{12}$ s$^{-1}$, $\sigma_{1}(\omega_{b})=137$ $\Omega^{-1}$ cm$^{-1}$ from Ref. [42], $n_{d}=V^{-1}$ with $V=5.71\times 10^{-22}$ cm$^{3}$ the volume of the unit cell, and the value $\mu_{b}=0.6e$ taken from our DFT calculations (below). The bandwidth of the external THz field, approximately 2.7 THz FWHM [Fig. 3(a) of the main text], is large compared to the linewidth of the phonon mode [0.1 THz FWHM, Fig. 4(c)], and by accounting for the reflection of the external field [Fig. 4(c)] we estimate $\beta_{b}\simeq 0.02$. Thus we deduce a maximum displacement of $\delta_{b}=0.17$ Å, which is comparable to that estimated in SrTiO3 [49].

## S2 Time-frequency analysis of THz-driven dynamics and mode symmetries

Figure S2 reports the temporal profile and frequency content measured for the THz pump pulse [Fig. S2(a)] and the resulting polarization rotations induced in the SrCu2(BO3)2 sample at 4 K [Fig. S2(b)].
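The two estimates above can be checked numerically. The following is a minimal sketch (Python standard library only) that evaluates Eq. (S1) with the measured pulse parameters and then combines Eq. (S2) with $P_{b}=n_{d}\delta_{b}\mu_{b}$ for the 4.60 THz mode; the vacuum impedance $z_{0}=376.73$ $\Omega$ and the elementary charge are the only constants added here.

```python
import math

z0 = 376.73           # vacuum impedance [Ohm]
e  = 1.602176634e-19  # elementary charge [C]

# --- Eq. (S1): peak THz electric field from the measured pulse parameters ---
E_p = 0.8e-6    # pulse energy [J]
w   = 92e-6     # average beam waist [m]
tau = 0.21e-12  # FWHM pulse duration [s]

E_thz = math.sqrt(z0 * E_p * 4 * math.sqrt(math.log(2))
                  / (math.pi * math.sqrt(math.pi) * w**2 * tau))  # [V/m]
print(f"E_THz = {E_thz / 1e8:.1f} MV/cm")   # 3.2 MV/cm, as quoted above

# --- Eq. (S2): maximum displacement of the 4.60 THz phonon ---
omega_b = 2 * math.pi * 4.6e12   # angular frequency [1/s]
sigma1  = 137 * 100              # 137 (Ohm cm)^-1 converted to [1/(Ohm m)]
beta_b  = 0.02                   # screening factor beta = 1 - R
V_cell  = 5.71e-28               # unit-cell volume [m^3]
mu_b    = 0.6 * e                # charge displacement of the mode [C]

P_b     = sigma1 / omega_b * beta_b * E_thz   # peak polarization [C/m^2]
delta_b = P_b * V_cell / mu_b                 # from P = n_d * delta * mu, n_d = 1/V
print(f"delta_b = {delta_b * 1e10:.2f} A")    # ~0.18 A, close to the 0.17 A above
```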
The spectral decomposition was computed using a Hamming sliding-window fast Fourier transform. It is clear that the distribution of frequencies in the THz pump pulse [Fig. S2(a)] causes not only a direct resonant driving of $E$-symmetric, IR-active phonons but also a less direct, nonlinear driving of $B_{1}$-symmetric, Raman-active phonons at higher frequencies [Figs. S2(b-c)]; here the low-temperature point group, $D_{2d}$ (space group I$\bar{4}$2m), permits the excitation of $B_{1}(\subset E\times E)$ phonons by the composition of two $E$-symmetric phonons. Similarly, the magnetic excitation from the singlet ground state to the TBS mode can be interpreted as a resonance driven by the difference-frequency harmonic components of the $E$-symmetric phonons shown in Figs. S2(d-e). The TBS [Fig. S2(f)] also has $B_{1}$ symmetry, as determined by Raman spectroscopy [33, 34], indicating that its driving relies on the same symmetry composition ($B_{1}\subset E\times E$). We comment that, because both the singlet ground state and the TBS are nonmagnetic ($S=0$), the origin of the polarization rotation measured in SrCu2(BO3)2 must lie in lattice (birefringence) effects, which are clearly enhanced by the spin-lattice coupling.

Figure S3: Sum-frequency ionic Raman excitation: experimental evidence and model. (a) Normalized experimental THz driving field (black) and normalized time-dependent THz-driven atomic displacements, calculated from Eqs. (S5) and (S6), of the IR-active phonon modes $q_{b}$ at $\omega_{b}=4.60$ THz (blue) and $q_{c}$ at $\omega_{c}=7.0$ THz (red). (b) Time-dependent phonon-driven atomic displacement, $q_{R}$, of the Raman-active phonon mode, obtained from Eq. (S7) and compared to the normalized polarization rotation (Fig. S2) measured by using a numerical band-pass filter to remove the components of the lower-frequency modes. (c) Fourier-transformed amplitude of $q_{R}$ and of the polarization rotation shown in panel (b). (d) Measured amplitude of $q_{R}$ compared with the quadratic electric-field dependence given by Eq. (S7).

## S3 THz-driven nonlinear phonon dynamics

As summarized in the main text, the resonant dynamic distortion of the lattice in response to intense and coherent THz excitation creates nonlinear channels for the transfer of energy to both magnetic and phononic modes. The phenomenon of sum-frequency ionic Raman scattering has been investigated only recently in both experiment [50] and theory [56], and our results include its clearest observation to date. To describe the driven nonlinear lattice dynamics that we observe, we consider two IR-active phonons with normal coordinates $q_{b}$ and $q_{c}$, with corresponding frequencies $\omega_{b}$ and $\omega_{c}$, and a Raman-active phonon with coordinate $q_{R}$ and frequency $\omega_{R}$. By discarding terms quadratic in $q_{R}$, on the assumption that the amplitude of the Raman mode will be much smaller than that of the IR modes driven by the THz pump ($|q_{b,c}|\gg|q_{R}|$), the minimal lattice potential to cubic (i.e. lowest anharmonic) order is

$V(q_{b},q_{c},q_{R})={\textstyle\frac{1}{2}}\omega_{b}^{2}q_{b}^{2}+{\textstyle\frac{1}{2}}\omega_{c}^{2}q_{c}^{2}+{\textstyle\frac{1}{2}}\omega_{R}^{2}q_{R}^{2}+[c_{bb,R}q_{b}^{2}+c_{bc,R}q_{b}q_{c}+c_{cc,R}q_{c}^{2}]q_{R},$ (S3)

where the $c$ coefficients specify the leading nonlinear coupling terms between the IR and Raman phonons. The equation of motion for a generic THz-driven, IR-active phonon mode, $m$, takes the form

${\ddot{q}}_{m}+\gamma_{m}{\dot{q}}_{m}=-\frac{\partial[V-B_{m}q_{m}E_{\rm THz}(t)]}{\partial q_{m}},$ (S4)

with $\gamma_{m}$ the damping rate and $B_{m}$ the dipole coupling constant introduced in Eq. (5) of the main text. For a phonon that is Raman-active but not IR-active, the driving term is only $\partial V/\partial q_{R}$.
The coherent THz-driven lattice dynamics are then described in the time domain from Eq. (S3), to leading order in $q_{R}$, by the coupled differential equations

${\ddot{q}}_{b}+\gamma_{b}{\dot{q}}_{b}+\omega_{b}^{2}q_{b}=-B_{b}E_{\rm THz}(t),$ (S5)

${\ddot{q}}_{c}+\gamma_{c}{\dot{q}}_{c}+\omega_{c}^{2}q_{c}=-B_{c}E_{\rm THz}(t),$ (S6)

${\ddot{q}}_{R}+\gamma_{R}{\dot{q}}_{R}+\omega_{R}^{2}q_{R}=-[c_{bb,R}q_{b}^{2}+c_{bc,R}q_{b}q_{c}+c_{cc,R}q_{c}^{2}].$ (S7)

Clearly Eq. (S7) for the Raman phonon describes a damped harmonic oscillator driven by terms quadratic in the IR-active phonon displacements, resulting in sum-frequency excitation processes. The effectiveness of this driving then depends on the proximity of the combinations $2\omega_{b}$, $2\omega_{c}$, and $\omega_{b}+\omega_{c}$ (and indeed $\omega_{c}-\omega_{b}$) to $\omega_{R}$. The clearest example in Fig. S2(b) is the $B_{1}$ Raman mode with frequency $\omega_{R}=11.7$ THz and damping $\gamma_{R}=0.2$ THz [Fig. S2(c)]. The two THz-pumped $E$-symmetric phonons marked by the asterisks have parameters $\omega_{b}=4.60$ THz, $\gamma_{b}=0.03$ THz and $\omega_{c}=7.00$ THz, $\gamma_{c}=0.10$ THz. Figure S3(a) shows the temporal evolution of $q_{b}$ and $q_{c}$ calculated from Eqs. (S5-S7) using the experimental THz pump field. As expected from the respective phonon frequencies, the Raman coordinate, $q_{R}$, is dominated by coherent oscillations at frequency $\omega_{b}+\omega_{c}$ [Fig. S3(b)], confirming the sum-frequency ionic Raman mechanism. The phenomenon is equally clear in the frequency domain, where the Fourier amplitudes of the measured and calculated dynamics are compared in Fig. S3(c).
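Eqs. (S5)-(S7) can be integrated directly to illustrate the sum-frequency mechanism. The sketch below is not the analysis used for Fig. S3: the coupling constants $B_{b}$, $B_{c}$, and $c_{bc,R}$ are set to unity, only the $c_{bc,R}$ term is kept, and an idealized Gaussian-envelope pulse replaces the measured pump field; the frequencies and linewidths are the experimental values quoted above, with the linewidths treated simply as damping rates. The Raman response nevertheless emerges near $\omega_{b}+\omega_{c}=11.6$ THz.

```python
import numpy as np

TWO_PI = 2 * np.pi  # time in ps, so a frequency f in THz enters as 2*pi*f

# Mode parameters from the pump-probe data
wb, wc, wR = 4.60 * TWO_PI, 7.00 * TWO_PI, 11.7 * TWO_PI
gb, gc, gR = 0.03 * TWO_PI, 0.10 * TWO_PI, 0.20 * TWO_PI

dt = 1e-4                       # integration step [ps]
t = np.arange(0.0, 20.0, dt)    # 20 ps window
# Idealized few-cycle pump with spectral weight at both IR phonon frequencies
E = np.exp(-(t - 1.0)**2 / (2 * 0.09**2)) * np.cos(TWO_PI * 5.5 * (t - 1.0))

def drive(w, g, f):
    """Semi-implicit Euler integration of q'' + g*q' + w^2*q = f(t)."""
    q = np.empty_like(t)
    qi = v = 0.0
    for i in range(t.size):
        v += (f[i] - g * v - w**2 * qi) * dt
        qi += v * dt
        q[i] = qi
    return q

qb = drive(wb, gb, -E)          # Eq. (S5) with B_b = 1
qc = drive(wc, gc, -E)          # Eq. (S6) with B_c = 1
qR = drive(wR, gR, -qb * qc)    # Eq. (S7), keeping only the c_bc,R term

# Spectrum of the Raman coordinate: the peak sits near wb + wc ~ 11.6 THz
freq = np.fft.rfftfreq(t.size, dt)   # [THz]
spec = np.abs(np.fft.rfft(qR))
sel = freq > 2.0                     # exclude slow transients
print(f"q_R peak at {freq[sel][spec[sel].argmax()]:.2f} THz")
```

The difference-frequency component at $\omega_{c}-\omega_{b}=2.4$ THz is also present in the product $q_{b}q_{c}$, but its response is suppressed because it lies far from the Raman resonance.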
Because the amplitude of $q_{R}$ varies linearly with each individual amplitude $q_{b}$ or $q_{c}$ [Eq. (S7)], it should scale quadratically with the THz field strength, as Fig. S3(d) confirms. We note again that all the phonon frequencies used in our analysis are the experimental values obtained from our pump-probe measurement, which with one exception match the FTIR spectroscopy measurements of Ref. [42] to within the experimental uncertainties; as a result of this match, we took all the phonon damping parameters ($\gamma_{m}$) from Ref. [42]. The exception was the phonon frequency $\omega_{b}=4.60$ THz, found at 4.75 THz by FTIR spectroscopy, where our measured value gave a significantly better account of the $B_{1}$ Raman frequency we observed at 11.58 THz. A similar analysis can be performed for the $B_{1}$ phonon mode at 8.57 THz, with driving by $\omega_{b}$ combined with the IR-active phonon at $\omega_{a}=3.80$ THz [both marked by blue diamonds in Fig. S2(b)].

Figure S4: Pump polarity inversion. (a) Representation of two THz pump pulses with inverted electric-field polarities, $E^{(+)}$ and $E^{(-)}$. (b-e) Comparison of the dynamics of the magnetic and lattice modes for these two pump polarities.

## S4 Dependence on polarity of the THz pump field

As Fig. 4(b) of the main text shows, the amplitude of the phonon-driven TBS excitation is proportional to the square of the driving electric field of the THz pump pulse. A further important test of our observations and modelling is therefore to invert the polarity of this field, as represented in Fig. S4(a). To implement this sign inversion experimentally, we rotate the crystal used for THz generation by 180$^{\circ}$ and compare the driven dynamics of the polarization-rotation signals. The results in the time domain are shown in Figs. S4(b-e). Because the $E$-symmetric (IR-active) phonon modes are excited directly, the carrier-envelope phase of the THz pump is imprinted onto them, and Figs. S4(c-d) confirm a phase shift of 180$^{\circ}$. By contrast, the dynamics of the TBS [Fig. S4(b)] are invariant under inversion of the electric-field polarity, fully consistent with a quadratic driving mechanism. The dynamics of the Raman-active phonon at 11.6 THz, presented in Sec. S3, are similarly insensitive to the change of polarity [Fig. S4(e)], as expected of a sum-frequency process. We note that perfect inversion of the IR phonon signals, and perfect overlap of the TBS and Raman phonon signals, are actually obtained for a time shift of $-20$ fs in the $E^{(-)}$ signal. This can be attributed to the fact that rotation of the THz generation crystal may produce a small time delay in the event of an inhomogeneity in its thickness (20 fs corresponds to approximately 6 $\mu$m in vacuum).

## S5 Phonon modulation of magnetic interactions

We have performed a hierarchy of DFT calculations in order to obtain quantitative estimates of the effects of the driven IR phonons on the magnetic system in SrCu2(BO3)2.

### S5.1 DFT calculations at equilibrium

First we used Quantum Espresso [51] to compute the total lattice and magnetic energies of SrCu2(BO3)2 at equilibrium. For these calculations we worked in the structural unit cell of the system, which is tetragonal and contains 44 atoms (of which 8 are Cu atoms) in two “Shastry-Sutherland” layers. The electron-ion interactions were modelled using pseudopotentials from the curated SSSP library [57, 58]. The plane-wave cut-off was set at 750 eV (6000 eV for the charge density) and for sampling of the Brillouin zone we used a 6$\times$6$\times$6 $k$-point grid. Structural relaxation was continued until each component of the force acting on every atom was less than 0.0025 eV/Å and the pressure (defined as ${\textstyle\frac{1}{3}}{\rm Tr}[\overleftrightarrow{\sigma}]$, with $\overleftrightarrow{\sigma}$ the stress tensor) was below 0.5 kbar.
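For orientation when reproducing this setup: Quantum Espresso specifies plane-wave cut-offs in Ry rather than eV, so the quoted values convert as follows (a trivial arithmetic check, with 1 Ry = 13.6057 eV).

```python
RY_IN_EV = 13.605693  # 1 Rydberg in eV

ecutwfc_ev, ecutrho_ev = 750.0, 6000.0  # cut-offs quoted above [eV]

print(f"ecutwfc = {ecutwfc_ev / RY_IN_EV:.0f} Ry")           # 55 Ry
print(f"ecutrho = {ecutrho_ev / RY_IN_EV:.0f} Ry")           # 441 Ry
print(f"ecutrho / ecutwfc = {ecutrho_ev / ecutwfc_ev:.0f}")  # 8
```

The charge-density cut-off is 8$\times$ the wavefunction cut-off, a ratio typical for ultrasoft or PAW pseudopotentials such as those in the SSSP library.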
These parameters ensure errors smaller than 1 meV/atom in the total energy and, more importantly, smaller than 1 K in the magnetic interactions (which depend only on total-energy differences, and thus converge faster and better than the total energies themselves). By comparing the energies of different spin configurations computed with the same relaxed structure, we fixed the value of the effective Hubbard term to $U=11.4$ eV by reproducing the interaction parameters $J=84.0$ K and $J^{\prime}=49.7$ K [53]. The lattice parameters we obtain for this value of $U$, the $T=0$ DFT + $U$ structure, agree with the measured low-temperature structure of SrCu2(BO3)2 [59] to within 1% for the $a$ and $b$ axes and 3% for the $c$ axis. The Born effective charges, $Z_{\rm eff}^{i}$, used to estimate the maximum displacements, $\delta_{m}$, in Sec. III of the main text, were computed for the system in the AFM configuration using the PHONON module of Quantum Espresso and the PBE functional [52].

Figure S5: Frozen-phonon density-functional theory. Symmetry-related magnetic interaction parameters in the spin network obtained for an arbitrary “frozen” configuration of the phonons in SrCu2(BO3)2. Red dots represent the $S=1/2$ spins at the Cu2+ sites in SrCu2(BO3)2. For every normal mode of the lattice, the system has two different values of the intradimer interaction, $J_{1}$ and $J_{2}$ (red lines), and eight different interdimer interaction terms, $J_{1}^{\prime}$, $J_{2}^{\prime}$, …$J_{8}^{\prime}$ (black). Shown are the 2$\times$2 supercells in each of the two layers of the structural unit cell (a total of 32 magnetic sites) required to obtain enough independent spin configurations to determine all 11 unknown parameters.

### S5.2 DFT calculations for frozen phonons

In order to extend our DFT calculations to include the nonequilibrium atomic configurations in the presence of lattice excitations, we first made a phonon symmetry analysis of the different interatomic paths.
From this we deduced that the most general magnetic state in the presence of a phonon distortion $q_{t}$ is characterized by ten different interaction parameters in the unit cell, two values of $J(q_{t})$ and eight of $J^{\prime}(q_{t})$, as shown in Fig. S5. To determine this many unknown parameters, it is necessary to work with a 2$\times$2$\times$1 supercell, meaning to use four of the basic magnetic unit cells of SrCu2(BO3)2. In these supercell calculations we reduced the $k$-point grid to 2$\times$2$\times$3, and verified that this sampling density provided essentially equivalent results.

Table S1: Eleven independent spin configurations used for one determination of the magnetic interaction parameters. Columns 1 to 16 are the magnetic sites shown in Layer 1 of Fig. S5; 1 denotes an up-spin ($S_{z}=1/2$) and $-1$ a down-spin ($S_{z}=-1/2$).

| Config. # | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | $-1$ | $-1$ | 1 | 1 | $-1$ | $-1$ | 1 | 1 | 1 | 1 | $-1$ | $-1$ | 1 | 1 | $-1$ | $-1$ |
| 2 | $-1$ | 1 | $-1$ | $-1$ | 1 | 1 | $-1$ | $-1$ | $-1$ | $-1$ | 1 | 1 | $-1$ | $-1$ | 1 | 1 |
| 3 | 1 | 1 | $-1$ | $-1$ | 1 | 1 | $-1$ | $-1$ | 1 | $-1$ | 1 | 1 | $-1$ | $-1$ | 1 | 1 |
| 4 | $-1$ | 1 | $-1$ | $-1$ | 1 | $-1$ | $-1$ | 1 | 1 | $-1$ | $-1$ | 1 | $-1$ | 1 | 1 | 1 |
| 5 | $-1$ | $-1$ | 1 | 1 | $-1$ | $-1$ | 1 | 1 | $-1$ | $-1$ | 1 | 1 | $-1$ | $-1$ | 1 | 1 |
| 6 | $-1$ | $-1$ | $-1$ | 1 | $-1$ | $-1$ | 1 | 1 | $-1$ | $-1$ | 1 | 1 | $-1$ | $-1$ | 1 | 1 |
| 7 | $-1$ | $-1$ | 1 | 1 | $-1$ | $-1$ | 1 | 1 | $-1$ | $-1$ | 1 | 1 | $-1$ | 1 | 1 | 1 |
| 8 | $-1$ | $-1$ | 1 | 1 | $-1$ | $-1$ | 1 | 1 | $-1$ | $-1$ | 1 | 1 | $-1$ | $-1$ | 1 | $-1$ |
| 9 | $-1$ | $-1$ | 1 | 1 | $-1$ | $-1$ | 1 | 1 | $-1$ | 1 | $-1$ | $-1$ | 1 | $-1$ | $-1$ | 1 |
| 10 | $-1$ | $-1$ | 1 | 1 | $-1$ | $-1$ | 1 | 1 | $-1$ | 1 | 1 | $-1$ | 1 | $-1$ | $-1$ | $-1$ |
| 11 | $-1$ | $-1$ | 1 | 1 | $-1$ | $-1$ | 1 | 1 | 1 | $-1$ | 1 | $-1$ | $-1$ | 1 | $-1$ | $-1$ |

We performed total-energy calculations for the lattice structures obtained by systematic displacement of all atoms according to the normal coordinates of the strongest two phonon modes measured in experiment [Fig. S2(a)], $q_{a}$ at $\omega_{a}=3.80$ THz and $q_{b}$ at $\omega_{b}=4.60$ THz. While it is possible to include many more of the phonons observed in Sec. S2, here we have focused rather on optimizing our treatment of the time and frequency domains. For each distorted structure, we computed the electronic ground state as a function of the displacement amplitudes of the two phonons and for 11 different configurations of up- ($S_{z}=1/2$) and down-oriented spins ($S_{z}=-1/2$), from which we obtained 11 linearly independent equations in order to determine all the interaction parameters in the system. This process is represented in Table S1 for one set of 11 spin configurations in Layer 1 of Fig. S5, and the corresponding equations are

$E(1)=E_{0}+4J_{1}+4J_{2}-6J_{1}^{\prime}-2J_{2}^{\prime}-6J_{3}^{\prime}-6J_{4}^{\prime}-6J_{5}^{\prime}-6J_{6}^{\prime}-6J_{7}^{\prime}-2J_{8}^{\prime},$
$E(2)=E_{0}+4J_{1}+4J_{2}-2J_{1}^{\prime}-6J_{2}^{\prime}-6J_{3}^{\prime}-6J_{4}^{\prime}-6J_{5}^{\prime}-6J_{6}^{\prime}-2J_{7}^{\prime}-6J_{8}^{\prime},$
$E(3)=E_{0}+6J_{1}+6J_{2}-4J_{1}^{\prime}-6J_{2}^{\prime}-6J_{3}^{\prime}-8J_{4}^{\prime}-8J_{5}^{\prime}-6J_{6}^{\prime}-6J_{7}^{\prime}-4J_{8}^{\prime},$
$E(4)=E_{0}+2J_{1}+2J_{2}-4J_{1}^{\prime}-6J_{2}^{\prime}-2J_{3}^{\prime}-4J_{4}^{\prime}-4J_{5}^{\prime}-2J_{6}^{\prime}-6J_{7}^{\prime}-4J_{8}^{\prime},$
$E(5)=E_{0}+4J_{1}+4J_{2}-6J_{1}^{\prime}-6J_{2}^{\prime}-6J_{3}^{\prime}-2J_{4}^{\prime}-6J_{5}^{\prime}-2J_{6}^{\prime}-6J_{7}^{\prime}-6J_{8}^{\prime},$
$E(6)=E_{0}+4J_{1}+4J_{2}-6J_{1}^{\prime}-6J_{2}^{\prime}-2J_{3}^{\prime}-6J_{4}^{\prime}-2J_{5}^{\prime}-6J_{6}^{\prime}-6J_{7}^{\prime}-6J_{8}^{\prime},$ (S8)
$E(7)=E_{0}+6J_{1}+6J_{2}-8J_{1}^{\prime}-6J_{2}^{\prime}-6J_{3}^{\prime}-4J_{4}^{\prime}-4J_{5}^{\prime}-6J_{6}^{\prime}-6J_{7}^{\prime}-8J_{8}^{\prime},$
$E(8)=E_{0}-2J_{1}-2J_{2}-6J_{2}^{\prime}-6J_{3}^{\prime}-6J_{6}^{\prime}-6J_{7}^{\prime},$
$E(9)=E_{0}-2J_{1}^{\prime}-6J_{2}^{\prime}-6J_{3}^{\prime}+2J_{4}^{\prime}-2J_{5}^{\prime}-6J_{6}^{\prime}-2J_{7}^{\prime}-2J_{8}^{\prime},$
$E(10)=E_{0}-2J_{1}^{\prime}-2J_{2}^{\prime}-6J_{3}^{\prime}-2J_{4}^{\prime}+2J_{5}^{\prime}-6J_{6}^{\prime}-6J_{7}^{\prime}-2J_{8}^{\prime},$
$E(11)=E_{0}-2J_{1}-2J_{2}+2J_{1}^{\prime}-4J_{2}^{\prime}-4J_{3}^{\prime}-2J_{4}^{\prime}-2J_{5}^{\prime}-4J_{6}^{\prime}-4J_{7}^{\prime}-2J_{8}^{\prime}.$

The extraction of the magnetic interaction parameters is by its nature a statistical exercise, because different spin configurations lead to different local spin densities in the DFT wave function, which cause subtle differences in the results for the effective $J$ and $J^{\prime}$ parameters. We benchmark the accuracy of our statistics by testing the magnetic interactions at equilibrium (i.e. $q_{a}=q_{b}=0$) with 100 different spin configurations and performing a least-squares regression analysis. As we show in Table S2, our results are fully consistent with the equilibrium $J$ and $J^{\prime}$ values. The resulting statistical error on $J^{\prime}$ is 0.4 K. $E_{0}$ in Eqs.
(S8) is a large constant that captures all of the nonmagnetic contributions to the calculation and cancels from the equations determining the magnetic interaction parameters.

Table S2: Magnetic interaction parameters of SrCu2(BO3)2 calculated using the frozen-phonon protocol with all phonon displacements set to zero.

| Interaction | $J_{1}$ | $J_{2}$ | $J_{1}^{\prime}$ | $J_{2}^{\prime}$ | $J_{3}^{\prime}$ | $J_{4}^{\prime}$ | $J_{5}^{\prime}$ | $J_{6}^{\prime}$ | $J_{7}^{\prime}$ | $J_{8}^{\prime}$ |
|---|---|---|---|---|---|---|---|---|---|---|
| Strength (K) | 83.2 | 83.6 | 48.5 | 48.7 | 47.5 | 47.4 | 48.1 | 48.1 | 48.9 | 48.7 |

Figure S6: Phonon displacement vectors and calculated magnetic interaction parameters in SrCu2(BO3)2. (a) Atomic motions in the normal mode ($q_{a}$) of the lattice at $\omega_{a}=3.80$ THz. (b) As in panel (a) for $q_{b}$ at $\omega_{b}=4.60$ THz. We comment that these phonon modes are not symmetric between the $a^{\prime}$ and $b^{\prime}$ axes [Fig. 1(a) of the main text], but that their doublet counterparts within each $E$-symmetric manifold restore this symmetry. (c-d) Intradimer interactions, $J_{1}$ and $J_{2}$, shown as functions of the phonon displacement amplitudes. (e-f) Interdimer interactions, $J_{1}^{\prime}$, …$J_{8}^{\prime}$. (g-h) Differences, $\Delta J_{12}^{\prime}$, $\Delta J_{34}^{\prime}$, $\Delta J_{56}^{\prime}$, and $\Delta J_{78}^{\prime}$, between pairs of interdimer interaction parameters. The parameters shown in panels (c-f) were obtained using 11 spin configurations and, after verification of their systematic evolution, were centered on the results of Table S2.

### S5.3 DFT calculations with driven phonons

To model our experiment, in Figs. S6(a-b) we show the vectors, meaning the ensembles of atomic displacements, of the two primary IR-active phonons excited by the THz pump ($\omega_{a}=3.80$ and $\omega_{b}=4.60$ THz).
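The inversion of Eqs. (S8) amounts to a linear least-squares fit. The sketch below transcribes the coefficient matrix directly from Eqs. (S8) (columns ordered as $E_{0},J_{1},J_{2},J_{1}^{\prime},\ldots,J_{8}^{\prime}$), generates synthetic energies from the Table S2 values with an arbitrary $E_{0}$, and refits the parameters; note that in this particular configuration set $J_{1}$ and $J_{2}$ enter every row with equal coefficients, so this set alone constrains only their sum.

```python
import numpy as np

# Coefficient matrix transcribed from Eqs. (S8); columns are
# [E0, J1, J2, J1', J2', J3', J4', J5', J6', J7', J8'].
A = np.array([
    [1,  4,  4, -6, -2, -6, -6, -6, -6, -6, -2],
    [1,  4,  4, -2, -6, -6, -6, -6, -6, -2, -6],
    [1,  6,  6, -4, -6, -6, -8, -8, -6, -6, -4],
    [1,  2,  2, -4, -6, -2, -4, -4, -2, -6, -4],
    [1,  4,  4, -6, -6, -6, -2, -6, -2, -6, -6],
    [1,  4,  4, -6, -6, -2, -6, -2, -6, -6, -6],
    [1,  6,  6, -8, -6, -6, -4, -4, -6, -6, -8],
    [1, -2, -2,  0, -6, -6,  0,  0, -6, -6,  0],
    [1,  0,  0, -2, -6, -6,  2, -2, -6, -2, -2],
    [1,  0,  0, -2, -2, -6, -2,  2, -6, -6, -2],
    [1, -2, -2,  2, -4, -4, -2, -2, -4, -4, -2],
], dtype=float)

# Synthetic energies from the equilibrium parameters of Table S2
# (E0 = 1000 K is an arbitrary nonmagnetic offset).
true = np.array([1000.0, 83.2, 83.6,
                 48.5, 48.7, 47.5, 47.4, 48.1, 48.1, 48.9, 48.7])
energies = A @ true

# The identical J1 and J2 columns make the matrix rank-deficient, so a plain
# solve would fail; least squares returns the minimum-norm solution instead.
fit, *_ = np.linalg.lstsq(A, energies, rcond=None)
print("rank:", np.linalg.matrix_rank(A))
print("max residual:", np.max(np.abs(A @ fit - energies)))  # ~0
```

The statistical analysis described above, averaging over many such configuration sets, is what resolves the individual parameter values.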
We performed DFT calculations of the magnetic interactions in the presence of phonon oscillations for each phonon separately, as shown in Figs. S6(c-h), and with both phonons superposed, as we show in the time series illustrated in Fig. 5(d) of the main text. Considering first the individual phonons, we observe that the intradimer interactions, $J_{1}(q_{a,b})$ and $J_{2}(q_{a,b})$, have a largely quadratic dependence on $q_{a,b}$ for both phonons [Figs. S6(c-d)], suggesting that out-of-plane O atomic motions cause the predominant effects on these parameters. By contrast, most of the interdimer interactions, $J_{i}^{\prime}(q_{a,b})$, show strong linear as well as quadratic contributions [Figs. S6(e-f)] that depend both on the interaction pathway in question and on the combination of in- and out-of-plane atomic motions. It is clear that all four difference parameters, $\Delta J_{12}^{\prime}(q_{a,b})$, $\Delta J_{34}^{\prime}(q_{a,b})$, $\Delta J_{56}^{\prime}(q_{a,b})$, and $\Delta J_{78}^{\prime}(q_{a,b})$, have strong linear contributions [Figs. S6(g-h)] that allow an efficient driving of two- triplon creation processes by the IR-active phonon oscillations (Sec. IV of the main text). For the purposes of creating nonlinear magnetophononic phenomena in SrCu2(BO3)2, the strongest modulation of the $\Delta J_{i,i+1}^{\prime}(q_{a,b})$ interactions is produced by the 4.60 THz phonon [Fig. S6(b)]. This is due largely to the fact that its maximum THz-induced phonon displacement, $\delta_{b}=0.17$ Å (Sec. IIB of the main text), is significantly greater than that of the other phonons (at 3.80 THz we estimated the displacement $\delta_{a}=0.04$ Å). These maximum displacements are included in the calculations shown in Fig. 5 of the main text, where we have computed the phonon-induced modulation of the magnetic interaction parameters as a time series based on the pulse durations of our experiment. 
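The separation into linear and quadratic contributions can be made explicit with a degree-2 polynomial fit to the frozen-phonon samples. The sketch below uses purely illustrative numbers (the coefficients are hypothetical, not our DFT values).

```python
import numpy as np

# Hypothetical frozen-phonon samples of one coupling, J'(q) = J0 + a*q + b*q^2
q = np.linspace(-0.2, 0.2, 9)    # phonon displacement amplitude [arb. units]
J0, a, b = 48.5, 6.0, -15.0      # equilibrium value, linear and quadratic terms
Jp = J0 + a * q + b * q**2

# A quadratic fit separates the linear part (which drives two-triplon
# creation) from the quadratic part; np.polyfit returns [b, a, J0].
b_fit, a_fit, J0_fit = np.polyfit(q, Jp, 2)
print(a_fit, b_fit)   # recovers a = 6.0 and b = -15.0
```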
For this calculation, we computed 285 points covering 10 ps with a time resolution of 0.035 ps, which accessed both the high- and low-frequency regimes to a sufficient degree that we obtained well-resolved results for all features, in particular the TBS peak. Finally, we comment that an average over all our computed values of $J$ and $J^{\prime}$ indicates that the averaged coupling ratio, $\bar{\alpha}(q_{a,b})=\bar{J}^{\prime}/\bar{J}$, increases as a function of $q_{a,b}$ for both phonon modes. This leads to the results shown in Fig. 5(g) of the main text and to the possibility of driving the static quantum phase transition (QPT) of the spin system into the plaquette phase by using the ultrafast driving of IR phonons to increase the time-averaged coupling ratio.
2101.01195
# The rapid transition from star-formation to AGN dominated rest-frame UV light at $\mathbf{z\simeq 4}$

R. A. A. Bowler,1 N. J. Adams,1 M. J. Jarvis1,2 and B. Häußler3

1Department of Astrophysics, University of Oxford, The Denys Wilkinson Building, Keble Road, Oxford, OX1 3RH
2Department of Physics, University of the Western Cape, Bellville 7535, South Africa
3European Southern Observatory, Alonso de Cordova 3107, Vitacura, Santiago, Chile
E-mail: [email protected]

###### Abstract

With the advent of deep optical-to-near-infrared extragalactic imaging on the degree scale, samples of high-redshift sources are being selected that contain both bright star-forming (SF) galaxies and faint active galactic nuclei (AGN). In this study we investigate the transition between SF- and AGN-dominated systems at $z\simeq 4$ in the rest-frame UV. We find a rapid transition to AGN-dominated sources bright-ward of $M_{\rm UV}\simeq-23.2$. The effect is observed in the rest-frame UV morphology and size-luminosity relation, where extended clumpy systems become point-source dominated, and also in the available spectra for the sample. These results allow us to derive the rest-frame UV luminosity function (LF) for the SF- and AGN-dominated sub-samples. We find that the SF-dominated LF is best fit with a double power law, with a lensed Schechter function being unable to explain the existence of extremely luminous SF galaxies at $M_{\rm UV}\simeq-23.5$. If we identify AGN-dominated sources according to a point-source morphology criterion, we recover the relatively flat faint-end slope of the AGN LF determined in previous studies. If we instead separate the LF according to the current spectroscopic AGN fraction, we find a steeper faint-end slope of $\alpha=-1.83\pm 0.11$. Using a simple model to predict the rest-frame AGN LF from the $z=4$ galaxy LF, we find that the increasing impact of host-galaxy light on the measured morphology of faint AGN can explain our observations.
###### keywords: galaxies: evolution – galaxies: formation – galaxies: high-redshift (pubyear: 2020) ## 1 Introduction How supermassive black holes and their host galaxies co-evolve over cosmic time poses many fundamental questions within astrophysics. The detection of luminous quasars at very high redshift (e.g. Fan et al., 2003; Willott et al., 2010a; Mortlock et al., 2011; Bañados et al., 2016, 2018; Yang et al., 2020) demonstrates that active black holes are present less than a Gyr after the Big Bang. Within the same epoch, the star-forming (SF) galaxy population is known to be building up rapidly from measurements of the evolving rest-frame UV luminosity function (LF; e.g. Bouwens et al., 2015; Finkelstein et al., 2015; Bowler et al., 2020; Ono et al., 2018). Until recently, the populations of quasars and SF galaxies at redshifts $z=4-8$ have typically been treated as separate, due primarily to the disparate luminosity space occupied by the current samples. This is despite the majority of galaxies and quasars at very high redshifts being selected based on the same spectral feature in optical/NIR survey data: the Lyman-continuum and/or the Lyman-$\alpha$ break. The strong Lyman-break in the spectral energy distribution (SED), which is redshifted into the optical filters at $z\gtrsim 3$ and the near-IR at $z\gtrsim 7$, has allowed large samples of UV-bright galaxies and AGN (in this work we use the more inclusive term AGN rather than quasar throughout) to be selected efficiently. 
In the last decade, the advent of intermediate surveys that probe areas up to a few hundred square degrees on the sky has led to the first samples that bridge both faint AGN and bright galaxies, filling in a previously unachievable parameter space in volume and luminosity (Matute et al., 2013; Kashikawa et al., 2015; Matsuoka et al., 2018b; Stevans et al., 2018; Ono et al., 2018; Adams et al., 2020). The properties of these intermediate luminosity sources are important for several reasons. Firstly, the existence of very UV bright, highly star-forming galaxies can challenge models of feedback and dust obscuration via the inferred steepness of the bright-end of the Lyman-break galaxy (LBG) UV LF (e.g. Bower et al., 2012; Gonzalez-Perez et al., 2013; Bowler et al., 2014; Dayal et al., 2014; Clay et al., 2015). Furthermore, the uncertainty in the number of the brightest galaxies in combination with ‘contamination’ of these samples with faint AGN or interloper populations can confuse the interpretation of the shape and evolution of the galaxy LF (e.g. Bowler et al., 2012; Bian et al., 2013). Secondly, the determination of the faint-end slope of the AGN LF is crucial for understanding if these sources played any significant role in reionizing the Universe at $z\gtrsim 7$ (e.g. as advocated by Giallongo et al., 2015, 2019; see discussion in Parsa et al., 2018). Thirdly, samples of sources in which the AGN and stellar component both contribute measurably to the observed light give an insight into how and when black holes become intricately linked to their host galaxy (e.g. via measurements of the black-hole to bulge/stellar mass relation at very high redshift; Willott et al., 2010b; Venemans et al., 2017). While the rest-frame UV $z\simeq 4$ AGN LF was first measured several decades ago (e.g. 
Warren et al., 1994; Richards et al., 2006; Masters et al., 2012; Ikeda et al., 2012), recent surveys have been able to select larger samples over a wider luminosity range, thus providing greater precision. In particular, there have been several successful campaigns to identify fainter sources at $z\geq 4$ with surveys such as the Subaru High-$z$ Exploration of Low-Luminosity Quasars (SHELLQs: Matsuoka et al., 2018a), the Infrared Medium-deep Survey (IMS: Kim et al., 2019) and the Hyper Suprime-Cam Strategic Survey Program (HSC-SSP; Akiyama et al., 2018). These studies have been able to constrain the faint end of the AGN LF for the first time at $z\gtrsim 4$; however, there remain large discrepancies in the derived faint-end slope, which ranges from $\alpha\simeq-1.3$ (Matsuoka et al., 2018b; Akiyama et al., 2018) to as steep as $\alpha\simeq-2$ (McGreer et al., 2018; Giallongo et al., 2019; Shin et al., 2020). (The AGN LF is typically parameterised as a double-power law (DPL) of the form $\phi\propto\phi^{*}\,/((L/L^{*})^{\alpha}+(L/L^{*})^{\beta})$. This functional form includes four free parameters: a bright and faint-end slope ($\beta$ and $\alpha$), a characteristic luminosity ($L^{*}$) and a normalisation $\phi^{*}$. For fitting the rest-frame UV LF of LBGs a Schechter function is commonly assumed, of the form $\phi\propto\phi^{*}\,(L/L^{*})^{\alpha}\,e^{-L/L^{*}}$, in which the bright-end slope is replaced by an exponential decline bright-ward of $L^{*}$.) A key challenge in the robust determination of the number density of the lowest luminosity AGN is that faint-ward of a certain absolute UV magnitude ($M_{\rm UV}\simeq-23$; Stevans et al., 2018; Ono et al., 2018; Adams et al., 2020), LBGs become overwhelmingly more numerous. In response, AGN selection methodologies have typically included a condition that the source must be unresolved in imaging data, as expected for a source dominated by the central AGN. 
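For reference, the DPL and Schechter forms defined in the footnote above can be written down directly. The sketch below uses the equivalent absolute-magnitude convention (an assumed but standard normalisation; all parameter values are illustrative, not fits from this paper):

```python
import numpy as np

def dpl_mag(M, phi_star, M_star, alpha, beta):
    """Double-power-law LF per unit magnitude: faint-end slope alpha,
    bright-end slope beta, break at M_star."""
    dM = M - M_star
    return phi_star / (10 ** (0.4 * (alpha + 1) * dM)
                       + 10 ** (0.4 * (beta + 1) * dM))

def schechter_mag(M, phi_star, M_star, alpha):
    """Schechter LF per unit magnitude: exponential decline bright-ward of M_star."""
    x = 10 ** (0.4 * (M_star - M))          # L / L*
    return 0.4 * np.log(10) * phi_star * x ** (alpha + 1) * np.exp(-x)

# Bright-ward of the break the DPL keeps a power-law tail, while the
# Schechter form is exponentially suppressed (illustrative parameters):
ratio = (dpl_mag(-28.0, 1e-6, -25.0, -1.5, -3.0)
         / schechter_mag(-28.0, 1e-6, -25.0, -1.5))   # ratio >> 1
```

The two forms agree closely on the faint side of the break; it is this bright-end difference that lets the presence or absence of extremely luminous sources discriminate between them.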
Despite this, the very faintest sources targeted by these studies have shown spectra that are typical of Lyman-break galaxies (e.g. Matsuoka et al., 2018b; Kashikawa et al., 2015). Furthermore, as studies probe fainter AGN in the rest-frame UV, it is unclear how the light from star formation might impact the morphology (e.g. Gavignaud et al., 2006) and hence cause current selection procedures based on compactness to become incomplete. Thus a more detailed analysis of the properties of faint AGN and bright galaxies is required. In Adams et al. (2020) we selected a sample of LBGs and AGN at $z\simeq 4$ from the COSMOS and XMM-LSS deep extragalactic fields using a photometric redshift analysis based on the ground-based optical to NIR photometry. The advantages of this sample over previous studies are i) we do not impose any condition on source size or morphology and hence we are complete to both point-sources and extended galaxies, ii) we exploit the NIR data which results in a very clean selection of $z\simeq 4$ sources and iii) we have used two of the most widely studied deep fields where there is a wealth of deep multi-wavelength data and spectroscopy available. Here we utilise this sample to investigate the properties of objects within the ‘transition’ regime between bright AGN and the typical galaxy population at this redshift. We do this by looking at the morphology and size of the sources using both the available ground-based and wide-area _Hubble Space Telescope_ (_HST_) mosaics in COSMOS. In addition we have compiled publicly available spectra for the sample and use this to further classify sources. The structure of the paper is as follows. In Section 2 we describe the variety of datasets we use, and in Section 3 we describe the size measurements and the results from the available archival spectra of the sample. In Section 4 we derive the AGN fraction from our data and estimate the separated SF and AGN-dominated LFs. 
We discuss our results in Section 5 and present a simple empirical model of the AGN LF which we use to interpret our results in Section 6. We end with conclusions in Section 7. Throughout this work we present magnitudes in the AB system (Oke, 1974; Oke & Gunn, 1983). The standard concordance cosmology is assumed, with $H_{0}=70\,{\rm km}\,{\rm s}^{-1}\,{\rm Mpc}^{-1}$, $\Omega_{\rm m}=0.3$ and $\Omega_{\Lambda}=0.7$. At $z=[3.5,4.0,4.5]$ this cosmology implies that one arcsec corresponds to physical distances of $[7.3,7.0,6.6]\,{\rm kpc}$. ## 2 Sample selection and data The sample of $z\simeq 4$ galaxies and AGN we utilise in this work was selected in the COSMOS and XMM-LSS deep extragalactic fields. The selection was based on a photometric redshift fitting of the optical to NIR bands ($u$-band to $K_{s}$) from the available ground-based data. In this paper we further include _HST_ ACS imaging and spectroscopic observations available in the public domain. ### 2.1 The sample The full sample of $z\simeq 4$ sources from Adams et al. (2020) consisted of 20064 (38722) sources in the COSMOS (XMM-LSS) fields bright-ward of the 50 percent completeness limit of $M_{\rm UV}\simeq-20$. To be included in the sample the object must have a best-fitting photometric redshift in the range $3.5<z<4.5$ with either a galaxy or AGN SED. Stars were removed using a relative $\chi^{2}$ cut, such that the galaxy or AGN template must have a better fit than the stellar model. In this work we refined this sample using the most up-to-date photometry in the fields. We matched the original catalogue with a new $I$-band selected catalogue (created in an identical fashion) that included the deeper HSC DR2 data, and the UltraVISTA DR4 imaging. The matching process revealed a small number of artefacts that were removed in the newer HSC release. 
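As an aside, the arcsec-to-kpc scales quoted above follow from the angular diameter distance in the adopted flat $\Lambda$CDM cosmology; a minimal numerical check (pure NumPy, trapezoidal integration):

```python
import numpy as np

H0 = 70.0         # km/s/Mpc, as adopted in the text
OM = 0.3          # Omega_m (flat, so Omega_Lambda = 1 - Omega_m)
C = 299792.458    # speed of light [km/s]

def kpc_per_arcsec(z, n=10001):
    """Proper transverse distance subtended by 1 arcsec at redshift z."""
    zs = np.linspace(0.0, z, n)
    inv_E = 1.0 / np.sqrt(OM * (1.0 + zs) ** 3 + (1.0 - OM))
    # Comoving distance D_C = (c/H0) * integral of dz / E(z):
    d_c = (C / H0) * np.sum(0.5 * (inv_E[1:] + inv_E[:-1]) * np.diff(zs))
    d_a = d_c / (1.0 + z)                          # angular diameter distance [Mpc]
    return d_a * 1e3 * np.pi / (180.0 * 3600.0)    # Mpc -> kpc, 1 arcsec -> rad

# Reproduces the quoted [7.3, 7.0, 6.6] kpc at z = [3.5, 4.0, 4.5]:
scales = [kpc_per_arcsec(z) for z in (3.5, 4.0, 4.5)]
```

The same numbers are returned by standard cosmology utilities (e.g. an `astropy.cosmology.FlatLambdaCDM` instance with these parameters).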
We also required the objects to satisfy the selection criterion described above when run on the new deeper photometry, and to be brighter than $15\sigma$ in the ground-based $I$-band to ensure a robust morphology analysis. As a result of these steps we were left with a sample of 15126 (25592) sources in COSMOS (XMM-LSS). For the morphology analysis we required the sources to be covered by the _HST_/ACS mosaic in the COSMOS field. The sub-sample covered by this mosaic contained 13848 sources. ### 2.2 Imaging data Within the COSMOS field we used the _HST_/Advanced Camera for Surveys (ACS) mosaic in the F814W filter (hereafter $I_{814}$: Koekemoer et al., 2007; Scoville et al., 2007; Massey et al., 2010). We obtained $10\times 10\,{\rm arcsec}^{2}$ cut-outs (https://irsa.ipac.caltech.edu/data/COSMOS/index_cutouts.html) from this mosaic for each source in our COSMOS sample where the ACS data existed (corresponding to 96 percent of the sample). The ACS data have a pixel scale of $0.03\,{\rm arcsec}/{\rm pix}$, and a typical point-source full-width-at-half-maximum (FWHM) of less than 0.1 arcsec. The COSMOS field is uniformly covered to single-orbit depth, leading to a $5\sigma$ depth of 27.0 (0.6 arcsec diameter aperture). Both the COSMOS and XMM-LSS fields contain a wealth of data in the optical and NIR bands. In this study we measure the rest-frame UV size from the HSC $I$ and CFHT $i$-bands. This allowed us to identify any potential systematics in the size measurement from using data of different depth and seeing. The HSC $I$-band is available across both fields, with varying depth, while the CFHT $i$-band is uniform in depth but is available for only a 1$\,{\rm deg}^{2}$ subsection of each field (Bowler et al., 2020). The images have a pixel scale of $0.15$ and $0.2\,{\rm arcsec}/{\rm pix}$ in COSMOS and XMM-LSS respectively. The seeing in these bands was approximately 0.65 arcsec. 
### 2.3 Publicly available spectroscopy Figure 1: The absolute UV magnitude distribution of our sample of $z\simeq 4$ sources selected initially in Adams et al. (2020). The samples in the COSMOS and XMM-LSS fields are shown in the upper and lower plots respectively. The total sample of galaxies and AGN with best-fit photometric redshifts in the range $3.5<z<4.5$ are shown as the grey histogram. The spectroscopically confirmed sources are shown as the blue shaded histogram. In hatched blue we have highlighted the spectroscopic redshifts from the deep galaxy surveys of ALPINE and VUDS in COSMOS and VANDELS in XMM-LSS. The spectroscopic redshifts at brighter magnitudes are typically from magnitude-limited surveys (e.g. SDSS, zCOSMOS; see Table 2). We endeavoured to extract the publicly available spectroscopy for the sample. In both fields we initially matched to the compilation of spectroscopic redshifts created by the HSC team (https://hsc-release.mtk.nao.ac.jp/doc/index.php/dr1_specz/), which we supplemented with additional catalogues from the VANDELS survey (Pentericci et al., 2018; McLure et al., 2018) in XMM-LSS, and the ALMA Large Program to INvestigate (ALPINE; Le Fèvre et al., 2019) and the Boutsia et al. (2018) sample of $z\simeq 4$ AGN in COSMOS. Note that in the HSC compilation, when a source had multiple redshifts from different surveys these were averaged. This can result in a stated redshift at $z<3$ when the source is securely at $z>3$. We corrected for this on a case-by-case basis. In addition we removed redshifts from 3D-HST as these are not purely spectroscopic, particularly at higher redshifts where there are few spectral features in the rest-frame optical. In Fig. 1 we show the spectroscopically confirmed sources in comparison to the full sample as a function of absolute UV magnitude. 
In total we found 63 and 236 high-redshift sources with secure spectroscopic flags in COSMOS and XMM-LSS respectively (76 and 270 with all flags; secure flags were typically 3 or 4, including flags that may have been modified for the presence of AGN, e.g. a flag of 14 in VVDS denotes a secure AGN). As part of this process we identified and removed 4 (12) low-redshift interlopers in COSMOS (XMM-LSS). In COSMOS, 100 percent of the sources in our sample at $M_{\rm UV}<-23.5$ are confirmed spectroscopically, partially due to the campaign of Boutsia et al. (2018). In XMM-LSS, we find a lower percentage of 42 percent in the same magnitude range. At the bright-end of our sample the spectroscopic redshifts come primarily from magnitude-limited surveys including the Sloan Digital Sky Survey (SDSS; Eisenstein et al., 2011; Richards et al., 2002), zCOSMOS (Lilly et al., 2007), the VIMOS VLT Deep Survey (VVDS; Le Fèvre et al., 2015) and PRIMUS (Coil et al., 2011). At the faint-end of our survey the redshifts were obtained from dedicated follow-up of high-redshift galaxies. The ALPINE sample includes sources from the VIMOS Ultra Deep Survey (VUDS; Le Fèvre et al., 2015) and Deep Imaging Multi-Object Spectrograph (DEIMOS; Hasinger et al., 2018) follow-up of high-redshift sources that are spread over the COSMOS field. The VANDELS survey, on the other hand, was limited to a smaller region of the field that overlaps with the _HST_ Cosmic Assembly NIR Deep Extragalactic Legacy Survey (CANDELS; Grogin et al., 2011; Koekemoer et al., 2011), where extremely deep spectroscopic integrations were performed. Thus the VANDELS sources extend to fainter magnitudes than those in ALPINE. At the faint-end of the survey $\lesssim 2$ percent of our sample have been confirmed as part of these deep spectroscopic surveys. While we cross-matched our sample to all available spectroscopic redshifts, we were only able to obtain a subset of reduced spectra depending on the survey. 
We extracted all of the publicly available spectra from SDSS, zCOSMOS, VVDS and VANDELS for further analysis. ## 3 Results Armed with our sample of $z\simeq 4$ sources in the COSMOS and XMM-LSS fields, we proceeded to measure their sizes and spectroscopic properties. As we are primarily concerned with the objects in the ‘transition’ regime where AGN and LBGs have similar number densities, this analysis focuses on the results at $M_{\rm UV}\lesssim-22$. At these bright magnitudes we have a larger proportion of spectroscopic follow-up from magnitude-limited surveys and we are able to identify the source morphology at high S/N in the high-resolution _HST_ data (e.g. even if the source fragments into several clumps the components are detected individually at $>5\,\sigma$). Figure 2: Postage-stamp images in the _HST_/ACS $I_{814}$ band of the brightest 30 sources in our COSMOS $z\simeq 4$ sample. The sources are presented in order of $M_{\rm UV}$, with the brightest sources in the top left, spanning the range $-24.18<M_{\rm UV}<-22.67$. The stamps are $2\,{\rm arcsec}$ on a side (corresponding to $\sim 14\,{\rm kpc}$ at $z=4$), in the standard orientation of North to the top and East to the left. The images have been scaled by surface brightness, from $2\sigma$ (approximately $24\,{\rm mag}/{\rm arcsec}^{2}$) to the peak. Contours have been added at intervals of $1\,{\rm mag}$ starting at the peak, to highlight the central compactness of the sources. The ID number is shown in the bottom left and the $M_{\rm UV}$ is shown in the upper right. Sources that have been spectroscopically confirmed are labelled with an asterisk in the bottom right corner. Note that all of the spectroscopically confirmed sources in this figure show strong quasar features in their spectra. ### 3.1 Visual morphology We first visually inspected the $z\simeq 4$ sources that had high-resolution imaging from the COSMOS _HST_/ACS $I_{814}$ mosaic. In Fig. 
2 we show postage-stamp images of the brightest 30 sources in our sample. The brightest eight sources have been spectroscopically confirmed as quasars (see Table 2) and, as expected, these sources appear compact. As we go fainter in this sub-sample there is a dramatic change in the visual morphology, with the appearance of extended, clumpy sources. For example, objects ID793151, ID746876 and ID168777 are clearly resolved, with multiple components that extend $>0.5\,{\rm arcsec}$ ($>3.5\,{\rm kpc}$) from the centroid. This is as expected from the galaxy size-luminosity relation and extensive studies of similarly luminous sources at $z\simeq 3$ (e.g. Law et al., 2012; Lotz et al., 2006) and $z>5$ (e.g. Jiang et al., 2013; Bowler et al., 2017). In the sources that are confirmed as AGN from their spectra, there is some evidence for weak extended emission (e.g. ID702265), which could be arising from the host galaxy. Furthermore, one of the sources (ID153468) that is a confirmed quasar from the available rest-frame UV spectrum in Boutsia et al. (2018) appears to be compact but resolved (which is also confirmed by a quantitative measure of the size, see Section 4.1). This demonstrates that the host galaxy light is contributing significantly to the rest-frame UV light from this source. We estimated the contribution from the host galaxy by simultaneously fitting a PSF and a Sérsic profile (fixed to $n=1.0$; Law et al., 2012) to the image using GALFIT. The best-fit from this crude estimate was an equal contribution from the AGN and SF in this object. For the other visual point-sources in the data, we found that a PSF alone was adequate to fit the imaging. 
If we assume that the extended sources are SF-dominated, whilst the point-sources are AGN-dominated (which is likely given the high rate of AGN spectra found for these sources), then the individual galaxy morphologies indicate a rapid transition from AGN to SF-dominated systems at $M_{\rm UV}\simeq-23$. A transition at this magnitude is in good agreement with that predicted by the simultaneous AGN and LBG LF fitting of Adams et al. (2020), and we compare our results directly to this study in our analysis of the AGN fraction in Section 4.1. ### 3.2 Size-luminosity relation Due to the clumpy nature of the sources in the _HST_ $I_{814}$ data, the sizes of these sources can be substantially biased depending on the chosen measurement technique. When running Source Extractor (SE; Bertin & Arnouts, 1996) on these images for example, we found that the majority of the SF-dominated sources were de-blended into several components. This results in a severe underestimate of the individual sizes of the brightest galaxies in the sample. To get around this issue we proceeded to make high signal-to-noise stacks of the data over a range in $M_{\rm UV}$. #### 3.2.1 Stacking procedure We created stacks in both the high-resolution _HST_ imaging and the ground-based HSC and CFHT $i$-band data. In both cases masks were formed from the SEGMENTATION images created with SE. For the ground-based data we masked all sources that were not associated with the central source. In the _HST_/ACS data we recombined de-blended components by retaining all objects that were within a radius of 0.8 arcsec from the central coordinate defined by the ground-based centroid. Due to their close proximity, and the extended emission connecting clumps in many cases, we are confident these components are at the same redshift (see Fig. 2 and discussion in Bowler et al., 2017). 
If the separate components were galaxies at lower redshifts, we would expect these interloper sources to affect the ground-based optical to NIR photometry and thus be removed as interlopers by our SED fitting process. With our recombined source we then determined the new centroid of the detected pixels in this extended object as the barycentre or first-order moment (as used in SE). With these masked and centred images we proceeded to stack the images using both an average and a median stack for comparison. The size measurement we used was the half-light radius from SE. In the following plots we present the results derived from the median stack using the barycentre centroid, however we comment on any different results found using the other methods. We obtained errors on our stacked size measurements using bootstrap resampling. At the bright end, where there are very few sources in each bin, this essentially measures the spread of the individual sizes in that bin. #### 3.2.2 Results In Fig. 3 we show the observed sizes of the stacked galaxy images, uncorrected for the PSF, as a function of $M_{\rm UV}$ for the ground-based and _HST_ data. At the faint-end of our sample we find that the galaxy stacks are resolved even in the ground-based data, with measured half-light radii in the range $r_{\rm 1/2}\simeq 0.55$–$0.6\,{\rm arcsec}$ as compared to $r_{\rm 1/2}\simeq 0.4$–$0.45\,{\rm arcsec}$ for the PSF. As we move to brighter galaxy stacks there is a gentle increase in size until $M_{\rm UV}\simeq-22.5$, where we see a drop to smaller sizes that are consistent with being unresolved. The results from the HSC $I$-band and the CFHT $i$-band are consistent within the errors for both fields. In COSMOS, where we also have the higher-resolution _HST_ $I_{814}$ imaging, the drop in size observed in the ground-based and _HST_ data occurs at a consistent $M_{\rm UV}$. 
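The recombination, stacking, and bootstrap steps of Section 3.2.1 can be summarised in code. This is a minimal sketch: the segmentation handling, the simple circular-aperture half-light radius, and all function and array names are illustrative stand-ins for the SE-based measurements (barycentre recentring is omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(1)
PIX_SCALE = 0.03          # arcsec/pix, as for the ACS cutouts

def recombine_mask(segmap, centre, radius_arcsec=0.8):
    """Boolean mask keeping every segmentation component whose centroid
    lies within radius_arcsec of `centre` (y, x in pixels)."""
    keep = np.zeros_like(segmap, dtype=bool)
    for label in np.unique(segmap):
        if label == 0:                       # 0 = background in SE segmentation maps
            continue
        ys, xs = np.nonzero(segmap == label)
        d = np.hypot(ys.mean() - centre[0], xs.mean() - centre[1])
        if d * PIX_SCALE <= radius_arcsec:
            keep |= segmap == label
    return keep

def half_light_radius(img):
    """Radius of the circular aperture about the image centre that
    encloses half of the total flux."""
    ny, nx = img.shape
    y, x = np.indices(img.shape)
    r = np.hypot(y - (ny - 1) / 2, x - (nx - 1) / 2).ravel()
    order = np.argsort(r)
    cum = np.cumsum(img.ravel()[order])
    return r[order][np.searchsorted(cum, 0.5 * cum[-1])]

def stacked_size(cutouts, n_boot=100):
    """Median-stack masked cutouts, measure r50, and bootstrap-resample
    the sources in the bin for the uncertainty."""
    n = len(cutouts)
    r50 = half_light_radius(np.median(cutouts, axis=0))
    boot = [half_light_radius(np.median(cutouts[rng.integers(0, n, n)], axis=0))
            for _ in range(n_boot)]
    return r50, np.std(boot)
```

For a bin of sources, `stacked_size` returns the half-light radius of the median stack together with a bootstrap error that, in sparsely populated bright bins, essentially measures the spread of the individual sizes.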
The wider area covered by XMM-LSS can also explain the shallower decline in size bright-ward of $M_{\rm UV}=-23$ as compared to the COSMOS result, because this larger volume will result in rarer bright galaxies being detected. Indeed, we identify a spectroscopically confirmed galaxy at $M_{\rm UV}=-23.6$ (see Section 3.3). In both COSMOS and XMM-LSS, the brightest sources all appeared as point sources in the ground-based data, however we found that in some cases the measured $r_{1/2}$ was larger than expected due to blending with foreground galaxies. Figure 3: The observed half-light radius of our sample of $z\simeq 4$ sources, uncorrected for the PSF, in the COSMOS and XMM-LSS fields (upper and lower plot respectively). In each plot the dark blue diamonds show the HSC I-band measurement, and the light blue squares show the CFHT i-band result. In the COSMOS plot we show the measured $r_{\rm 1/2}$ from the _HST_/ACS $I_{814}$-band as the black circles. The blue horizontal band shows the $r_{\rm 1/2}$ measured for the ground-based PSF. The corresponding band for the ACS data is shown as the black/grey line. In all datasets we see a drop-off in measured size at $M_{\rm UV}<-22.5$, although this is less pronounced in XMM-LSS. In order to measure the size-luminosity relation we corrected for the effect of the PSF by subtracting in quadrature the $r_{1/2}$ measured for stars in the imaging. This is an approximate correction for the PSF; however, we use it for comparison with previous studies. The size-luminosity relation was measured from the _HST_/ACS data only, as these data provide the most robust size measurements due to the smaller effect of the PSF (which has $r_{1/2}=0.0725\,{\rm arcsec}$). We then converted the $r_{1/2}$ into a physical distance by assuming a redshift of $z=4.0$. The resulting size-luminosity relation derived from the _HST_/ACS data is shown in Fig. 4. As expected from this effective rescaling of the observed sizes shown in Fig. 
3, we see a clear drop in size at $M_{\rm UV}<-22.5$. To determine the size-luminosity relation from the sample we therefore fit to the points faint-ward of this magnitude, assuming the standard parameterisation of $r_{\rm 1/2}\propto L^{\beta}$ (e.g. Shen et al., 2003). We find a best-fitting slope of $\beta=0.16\pm 0.03$, with a normalisation given by $R_{0}=1.45\pm 0.02$ at $M_{\rm UV}=-21.0$. If we use the SE single-component centroid (e.g. prior to recombining de-blended components), we derive an identical slope, but a lower normalisation of $R_{0}=1.36\pm 0.02$. We find no difference in results when using an average stack instead of the median presented here. Our derived size-luminosity relation is consistent with that determined by Huang et al. (2013), who found $\beta=0.22\pm 0.06$, and Curtis-Lake et al. (2016), who found $\beta=0.06\pm 0.11$. We find an offset in normalisation compared to the relation derived in Curtis-Lake et al. (2016), which we attribute to the different measurement of size used by that study. By extrapolating our fitted size-luminosity relation to brighter magnitudes, it is evident that there is a dramatic drop in the observed sizes of $z\simeq 4$ sources. We attribute this drop to the increasing contribution of point-sources bright-ward of $M_{\rm UV}=-22.5$, in agreement with the visual morphologies shown in Fig. 2. The transition occurs over almost one magnitude, being complete around $M_{\rm UV}=-23.25$ according to our ACS stacks. From the XMM-LSS results shown in Fig. 3, which cover a wider area than COSMOS and hence are likely to detect the presence of the rarest SF galaxies, there is evidence for extended sources up to $M_{\rm UV}\simeq-24$. Figure 4: The size-luminosity relation at $z\simeq 4$, derived from the subset of our sample that have high-resolution ACS $I_{814}$ coverage. Our best-fit relation to the points at $M_{\rm UV}>-22.5$ is shown as the solid black line, with the grey shading showing the $1\sigma$ confidence interval. 
The size-luminosity relations from Huang et al. (2013) and Curtis-Lake et al. (2016) at $z=4$ are shown as the orange dashed and blue dotted lines respectively. A clear deviation from the relation is observed at bright magnitudes, with the sources at $M_{\rm UV}<-23.2$ being consistent with being unresolved by _HST_/ACS. ### 3.3 Rest-frame UV spectroscopy Figure 5: A compilation of rest-frame UV spectra of the brightest sources in the $z\simeq 4$ sample. The spectra have been shifted into the rest-frame according to the spectroscopic redshift provided by each survey, and are ordered by absolute UV magnitude with the brightest source at the top left. The raw data is shown as the coloured background and a box-car filtered spectrum is shown in black. Each spectrum (which was originally in units of ${\rm erg}/{\rm s}/$Å) was normalised to a peak flux of 1.0, and has been presented offset in the vertical direction for clarity. On the left of each spectrum is a label presenting the ID number and the absolute UV magnitude of the object. On the right of each spectrum is the name of the survey which obtained the data. We label common high-ionization emission lines with vertical dashed lines. Figure 6: The spectrum of the faintest AGN we have identified in our $z\simeq 4$ sample when cross-matched to publicly available spectra. The data is displayed as in Fig. 5. This source was found within the XMM-LSS field and has a spectrum from the VANDELS survey. It is the only source we find with AGN features faint-ward of $M_{\rm UV}=-22.4$. To further inform our classification of sources as SF or AGN-dominated in the ‘transition’ region observed in the size-luminosity relation, we retrieved the publicly available spectra for the sample. While the brightest sources in our sample are confirmed from magnitude-limited surveys (e.g. SDSS, zCOSMOS), there is a dearth of spectra in the range $-23.5<M_{\rm UV}<-22.5$, as is visible in Fig. 1 and shown in Table 2. 
Nevertheless, we compiled the publicly available spectra and present the results for all sources brighter than $M_{\rm UV}=-22$ in Fig. 5. We smoothed the spectra with a box-car filter of width $1000\,{\rm km}/{\rm s}$ in the rest-frame, to highlight the spectral features above the noise. The brightest seven sources show broad emission lines of NV $\lambda 1240$Å, SiIV $\lambda\lambda 1393,1402$Å and CIV $\lambda\lambda 1548,1550$Å in addition to strong Lyman-$\alpha$ emission, all of which are clear signatures of unobscured AGN spectra. In addition to the typical AGN spectra, we identify sources that show the appearance of SF-dominated light in the rest-frame UV. Faint-ward of $M_{\rm UV}=-24$ we see a majority of sources with SF-dominated spectra, showing narrow Lyman-$\alpha$ emission and absorption lines. Of particular interest is ID1448401, which is the most luminous source ($M_{\rm UV}=-23.6$) in this sub-sample to show a SF-dominated spectrum. We discuss the implications of the discovery of this object for the LF in Section 4. Within the SF-dominated objects, we see a large variation in the observed spectra, with some showing strong Lyman-$\alpha$ and others showing a continuum break and no appreciable Lyman-$\alpha$ emission. The sources shown in Fig. 5 are particularly bright, which makes it relatively straightforward to see the presence of SF or AGN-type features in the spectra. Even with this limited spectroscopic sub-sample, it is evident that there is a transition in the rest-frame UV spectra of $z\simeq 4$ sources between $-24.0<M_{\rm UV}<-22.0$. Fainter than $M_{\rm UV}=-22.4$ we find only one other spectrum that shows clear evidence of AGN signatures, both through a visual inspection of the smoothed data and through an analysis of the spectral flags provided by each survey. The spectrum of this faint AGN is shown in Fig. 6. 
This source was observed as part of the VANDELS survey of the XMM-LSS field and has $z_{\rm spec}=3.9407$ and $M_{\rm UV}=-21.0$. It lies outside the region of _HST_ data from CANDELS; however, it appears extended in the ground-based HSC $I$-band data, with a PSF-uncorrected $r_{\rm 1/2}=0.77\,{\rm arcsec}$. In the spectrum there are again strong emission lines of high-ionization species including CIV and HeII. In comparison to the brighter AGN shown in Fig. 5 however, ID520330 shows stronger and considerably narrower emission lines. We measured the FWHM of the CIV doublet in all of the spectra shown, both directly from the smoothed data and through fitting a simple model of two Gaussians at the doublet wavelengths (constrained to have the same normalisation and standard deviation). Both methods produced consistent results within the errors. The result of this analysis was that the brighter AGN in our sample show CIV $FWHM\simeq 2000$–$6000\,{\rm km}/{\rm s}$, as found for the general population of SDSS quasars for example (e.g. Vanden Berk et al., 2001). In contrast, the faint source ID520330 shows a significantly narrower width of $FWHM=1200\pm 100\,{\rm km}/{\rm s}$. Similarly, this faint source has the highest rest-frame equivalent width of CIV amongst the AGN spectra, showing $EW_{0}\simeq 150\pm 30$Å. This is to be compared with $EW_{0}\simeq 20$–$90$Å for the brighter sources. Emission lines of this width and strength are characteristic of obscured Type II AGN (e.g. Alexandroff et al., 2013), where only the narrow-line region is observed. The fact that we see Type II signatures in the faintest source in the rest-frame UV is also to be expected, as the bright continuum from the AGN is obscured in this case. Thus for this source we are observing predominantly the host galaxy continuum, with the addition of AGN emission lines. 
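The two-Gaussian doublet fit described above can be sketched as follows. The shared-amplitude, shared-width model is from the text; the grid search (with the amplitude solved analytically at each trial width) and the synthetic input are illustrative stand-ins for the actual survey spectra:

```python
import numpy as np

C_KMS = 299792.458
CIV = (1548.19, 1550.77)    # rest-frame doublet wavelengths [Angstrom]

def doublet(wave, amp, sigma):
    """Two Gaussians at the CIV doublet wavelengths, constrained to share
    the same normalisation and standard deviation."""
    return sum(amp * np.exp(-0.5 * ((wave - w0) / sigma) ** 2) for w0 in CIV)

def fit_fwhm_kms(wave, flux):
    """Least-squares grid search over the shared width; returns the
    velocity FWHM (2.3548 sigma) at the doublet centre."""
    sigmas = np.linspace(0.5, 40.0, 400)
    def chi2(s):
        model = doublet(wave, 1.0, s)
        amp = np.dot(flux, model) / np.dot(model, model)  # analytic best amplitude
        return np.sum((flux - amp * model) ** 2)
    sigma = sigmas[np.argmin([chi2(s) for s in sigmas])]
    return 2.3548 * sigma / np.mean(CIV) * C_KMS
```

A width of $\sigma\simeq 2.6$ Å at 1549 Å corresponds to the $FWHM=1200\,{\rm km}/{\rm s}$ measured for ID520330; the broad-line AGN at 2000–6000 km/s return proportionally larger widths.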
## 4 Separating the UV LF of AGN and LBGs As expected, the brightest sources in our $z\simeq 4$ sample appear to be AGN-dominated in the rest-frame UV, showing a point-source morphology in the _HST_/ACS imaging and strong quasar features in the rest-frame UV spectra. Faint-ward of $M_{\rm UV}\simeq-22.5$ however, the sources become extended, with spectra that are dominated by the light from young stars. In the ‘transition’ regime between these AGN and SF-dominated objects, we find evidence for a mixture of these two classes in our sample. In this Section we use these observations to infer the rest-frame UV LF of the two components, with the assumption that the majority of the sources in our sample can be separated into either an AGN or SF-dominated category. The existence of a slightly extended source that shows the rest-frame UV spectrum of an AGN (see Section 3.1) demonstrates that this assumption will break down depending on how AGN are distributed in the underlying galaxy population. We discuss this issue further in Section 6. ### 4.1 AGN fraction To separate AGN and galaxies in our sample we define an AGN fraction ($f_{\rm AGN}$) as a function of absolute UV magnitude. We first define a quantitative measure of AGN-dominated sources from the $I_{814}$-band high-resolution images by classifying objects with $r_{1/2}<0.1\,{\rm arcsec}$ as AGN. In addition, we determined a comparison $f_{\rm AGN}$ from the visual morphology of the brightest sources. Despite this being more subjective than a size cut, we found very close agreement between these two measures. As a final check we also used GALFIT to fit the stacked images in each $M_{\rm UV}$ bin. We used a two-component model consisting of a point-source and a Sérsic profile (with a fixed index of $n=1$). Reassuringly, the AGN fraction of each stack, as defined by the ratio of the flux in the point-source compared to the total flux, agreed very well with the size cut. 
Hence we are confident that the AGN fraction derived from the source morphology does not depend significantly on the method used to derive it. Faint-ward of $M_{\rm UV}=-22$, the smaller mean sizes of galaxies, coupled with the scatter in the galaxy size–luminosity relation, make it more challenging to separate compact galaxies from point-sources using a size criterion. Hence we do not present measurements of the morphology-based $f_{\rm AGN}$ for sources faint-ward of $M_{\rm UV}=-22$. We also defined an AGN fraction from archival spectra for a sub-set of our sample. Using the spectra presented in Fig. 5, we identified AGN-dominated sources according to the presence of strong emission lines of CIV, NV and HeII. Due to the small number of sources with spectra, we used wider bins than for our morphology measurement. We determined the centre of each bin by taking the average source luminosity, to negate any bias in the distribution of sources within that bin. Faint-ward of $-22.0$ we find only one source with AGN signatures in the available spectroscopy. This source is identified as a Type II AGN (ID520330) in which the rest-frame UV continuum is dominated by the host-galaxy light, and we therefore class this source as SF-dominated (as discussed in Section 6.3). Figure 7: The AGN fraction as a function of absolute UV magnitude at $z=4$, derived from morphology/size criteria (blue circles) and from spectroscopy (orange squares). We compare to previous estimates of the AGN fraction at $z=4$ from Ono et al. (2018) and the $z=2$–$3$ results from Sobral et al. (2018), shown as the open black squares and grey diamonds, respectively. The lines show the predicted $f_{\rm AGN}$ from the simultaneous fitting of the LBG and AGN LF presented in Adams et al. (2020); the dashed and solid lines show the results assuming a Schechter and DPL form for the LBG LF, respectively.
The four images in the upper row show the result of stacking our sample in the four grey highlighted bins in $M_{\rm UV}$ shown on the plot. The stamps are $1.5\,{\rm arcsec}$ on a side, with contours at intervals of $1.0\,{\rm mag}$ from the peak. We present the derived $f_{\rm AGN}$ measurements in Fig. 7. Both the morphological and spectroscopic measurements show a sharp drop in $f_{\rm AGN}$ over $-24\lesssim M_{\rm UV}\lesssim-22.5$, with roughly equal numbers of AGN and LBGs at $M_{\rm UV}\simeq-23.2$. This is also visually apparent in the stacked _HST_/ACS images (top of Fig. 7), where at $M_{\rm UV}\simeq-23$ the stack is clearly extended (although with less flux in the wings), while at $M_{\rm UV}\simeq-23.8$ the image is consistent with being a point-source. Comparing to previous estimates of $f_{\rm AGN}$ at $z\simeq 4$, we find good agreement with the spectroscopic measurements of Ono et al. (2018), who used predominantly archival redshifts with spectroscopic flags to determine the AGN fraction. The advantage of our method of AGN classification from the full spectrum is that we are not sensitive to differences in AGN classification between spectroscopic surveys. We find a brighter transition magnitude than that of Sobral et al. (2018), who found a drop in the fraction of AGN at $M_{\rm UV}=-21.5$ at $z\simeq 2$–$3$. That study was based on the follow-up of strong Lyman-$\alpha$ emitters rather than LBGs, and hence it could be expected that this pre-selection for strong line-emitters would preferentially detect AGN at fainter magnitudes. We note, however, that in Sobral et al. (2018) the AGN fraction at $M_{\rm UV}>-22.5$ is determined predominantly from sources with a detection of the NV line at low S/N ($\lesssim 3.0$), so their classification as AGN is somewhat uncertain. At $-23<M_{\rm UV}<-22$ we see a slight difference in the derived AGN fraction from our morphological and spectroscopic measurements.
From a morphology cut we measure $f_{\rm AGN}=0.06^{+0.03}_{-0.02}$ at $M_{\rm UV}=-22.5$, while with spectroscopic data we find $f_{\rm AGN}=0.25^{+0.17}_{-0.12}$. Although the errors are large, due predominantly to the small-number statistics of the spectroscopic sub-sample, a difference in $f_{\rm AGN}$ between a strict morphological selection and a spectroscopic identification could be expected in this magnitude range. This is a consequence of the increasing importance of host-galaxy light at fainter UV magnitudes, and we present a toy model that can explain these observations in Section 6. Alternatively, the slight difference found could be due to bias in the spectroscopic measurement, as arguably the strong emission lines from AGN and the compactness of the emission could make them easier to identify spectroscopically. In the range $-23<M_{\rm UV}<-22$ we find no difference between the sizes of the spectroscopically confirmed sources and our full sample, suggesting that we are not biased towards compact sources. In this magnitude range we find 10 sources with archival spectroscopy, four from the VANDELS survey and six from VVDS (of which the eight with secure flags are shown in Fig. 5). While VVDS is a purely $I$-band magnitude-limited survey, VANDELS selected against compact sources over the 50 percent of the survey area that was covered by _HST_ imaging (McLure et al., 2018), and hence VANDELS should be biased _against_ AGN. If we measure $f_{\rm AGN}$ in the VVDS and VANDELS surveys separately, we find $f_{\rm AGN}=0.25^{+0.25}_{-0.15}$ for both surveys when secure flags are used, indicating that there is no clear bias within the limitations of small-number statistics. Two of the VVDS spectra were not included in our initial AGN fraction calculation as they have poor quality flags.
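Asymmetric uncertainties such as $0.25^{+0.17}_{-0.12}$ arise naturally when a fraction is estimated from a handful of objects. The paper does not state how its errors were computed; one common choice in astronomy is the Bayesian binomial interval of Cameron (2011), sketched here with a flat prior:

```python
from scipy.stats import beta

def binomial_interval(k, n, conf=0.683):
    """Fraction k/n with a central `conf` credible interval from the
    Beta(k+1, n-k+1) posterior (flat prior on the true fraction).
    Note: for k = 0 or k = n, one-sided limits are more appropriate."""
    lo = beta.ppf(0.5 * (1.0 - conf), k + 1, n - k + 1)
    hi = beta.ppf(0.5 * (1.0 + conf), k + 1, n - k + 1)
    f = k / n
    return f, f - lo, hi - f
```

For example, 2 AGN out of 8 spectra gives $f=0.25$ with asymmetric errors of roughly $+0.2/-0.1$, comparable in form to the values quoted above.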
If we assume these objects are SF-dominated, under the assumption that AGN features are easier to identify, then we obtain a lower $f_{\rm AGN}=0.20^{+0.15}_{-0.10}$ in this magnitude range. This value is still higher than that derived from our morphology measurement; however, larger spectroscopic samples are clearly required to determine whether there is a real discrepancy between the $f_{\rm AGN}$ found from a strict morphological cut and that from a spectroscopic classification. In Fig. 7 we also present the predicted AGN fraction from the simultaneous fitting of the combined AGN and LBG LF presented in Adams et al. (2020). Adams et al. (2020) assumed either a Schechter function or DPL form for the LBG LF, in addition to a single power-law (PL) to model the faint-end of the AGN LF. Without any further information about the nature of the sources, both models produced a good fit to the observed UV LF over $-26<M_{\rm UV}<-20$ (see left panel of Fig. 8). In comparison to our derived $f_{\rm AGN}$, we see that the Schechter function form of the LBG LF predicts a steeper decline in the fraction of AGN at fainter magnitudes than a DPL model, due to the exponential drop-off at the bright-end of this parameterisation. Note that the position of this drop depends on the position of the ‘knee’ in the LF, which is strongly constrained by the number density of sources at $M_{\rm UV}>-21$. Both our measurements of the AGN fraction deviate from this steeper Schechter prediction at $M_{\rm UV}\sim-24$, as do the results of Ono et al. (2018), suggesting that a DPL is the more appropriate function to describe the LBG LF at $z\simeq 4$.

### 4.2 The Luminosity Function

In the previous section we compared the observed $f_{\rm AGN}$ to that expected from the fitting of the full AGN + LBG rest-frame UV LF. In this section we instead use the $f_{\rm AGN}$ derived in this study to separate the LF results of Adams et al. (2020) into AGN- and SF-dominated subsamples.
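In practice, the separation applied in this section amounts to scaling the binned LF points by $f_{\rm AGN}$ and $1-f_{\rm AGN}$. A minimal sketch (our own helper, not the paper's code; the simple error scaling ignores the uncertainty on $f_{\rm AGN}$ itself):

```python
import numpy as np

def split_lf(phi, phi_err, f_agn):
    """Split a total UV LF into AGN- and SF-dominated components using
    an AGN fraction evaluated at each magnitude bin."""
    f = np.asarray(f_agn, dtype=float)
    phi = np.asarray(phi, dtype=float)
    phi_err = np.asarray(phi_err, dtype=float)
    agn = (f * phi, f * phi_err)          # (number density, error)
    sf = ((1.0 - f) * phi, (1.0 - f) * phi_err)
    return agn, sf
```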
Because we could only classify a small fraction of the full sample using the morphology and spectroscopy data (high-resolution imaging was only available in COSMOS, and the spectroscopy covers only $\sim 1$ percent of the sample), we elected to apply the $f_{\rm AGN}$ to the data points from Adams et al. (2020), as opposed to recalculating the LF from a significantly smaller sample. We determined the separate LFs using the AGN fractions derived from the morphology and spectroscopy results separately. To interpolate the $f_{\rm AGN}$, we fit a constrained model to our binned AGN fraction points shown in Fig. 7. The model consisted of two power-laws, to approximate the overlap between the bright-end of the galaxy LF and the faint-end of the AGN LF without over-fitting the data. Due to the difference in the $f_{\rm AGN}$ derived from the morphological and spectroscopic data, we find differences in the separate LFs for the AGN- and SF-dominated sources, as shown in the two right-hand plots in Fig. 8. Figure 8: The rest-frame UV LF at $z\simeq 4$. The left-hand panel shows the full LF derived in Adams et al. (2020) as the open black circles. The solid (dotted) lines on this plot show the result of the simultaneous fitting presented in Adams et al. (2020) with a DPL (Schechter) assumed galaxy LF. The central and right-hand plots show a zoom-in of the transition region, where we have separated objects whose rest-frame UV light is SF (blue diamonds) or AGN (red squares) dominated. In these plots we show the best-fit AGN power law as the red line. The best-fit Schechter and DPL functions to the SF-dominated results are shown as the blue dotted and solid lines, respectively. For the Schechter function the fit is constrained by points fainter than $M_{\rm UV}=-22$. The effect of the magnification bias on the Schechter function is shown as the blue shaded excess on this curve. In all three plots we show the AGN results from Akiyama et al.
(2018) as the grey filled circles. We present the results of fitting the separated AGN- and SF-dominated LFs with different parameterisations in Fig. 8. The best-fit parameters are presented in Table 1, in comparison to the LF parameters derived from the simultaneous fit of Adams et al. (2020). The AGN fractions derived from these fits, in comparison to the results of Adams et al. (2020), are presented in Appendix B. We fit our separated AGN-dominated UV LFs using a single power-law, as we do not extend faint-ward of the apparent LF knee at $M_{\rm UV}\sim-26$. For the SF LF we fit using both a Schechter function and a DPL. If we focus first on the SF-dominated results, we find that the separation of sources using a morphology or spectroscopy criterion makes only a marginal difference to the derived LF. This is evident in the fitting results presented in Table 1, where the values are well within the $1\sigma$ errors in the different scenarios. We also find good consistency with the Schechter and DPL parameters from Adams et al. (2020), which is to be expected as the LBG fit is predominantly constrained by the data-points at $M_{\rm UV}>-22$. By applying the methodology of Mason et al. (2015) and Barone-Nugent et al. (2015), we checked that strong gravitational lensing applied to a Schechter function fit could not reproduce the number of bright sources. The excess that results from the lensing is shown in Fig. 8 and is too small to account for the number density we find at $M_{\rm UV}<-23$. In contrast to the SF-dominated LF, the faint-end of the AGN-dominated LF depends more significantly on the assumed $f_{\rm AGN}$. If we use a morphology criterion we find a shallow slope ($\alpha=-1.19\pm 0.05$), due to the rapid drop in point-source dominated sources faint-ward of $M_{\rm UV}\simeq-23$. In this case, we find close agreement with the results of Akiyama et al. (2018), who derived a faint-end slope of $\alpha=-1.30\pm 0.05$.
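For reference, the Schechter and double power-law parameterisations used throughout are conventionally written in absolute magnitudes as (standard forms, not quoted explicitly in the text; conventions vary in the $\ln 10/2.5$ prefactor):

$$\phi_{\rm Sch}(M)=\frac{\ln 10}{2.5}\,\phi^{*}\,10^{-0.4(M-M^{*})(\alpha+1)}\,\exp\!\left[-10^{-0.4(M-M^{*})}\right],$$

$$\phi_{\rm DPL}(M)=\phi^{*}\left[10^{0.4(\alpha+1)(M-M^{*})}+10^{0.4(\beta+1)(M-M^{*})}\right]^{-1},$$

with characteristic magnitude $M^{*}$, normalisation $\phi^{*}$, faint-end slope $\alpha$ and, for the DPL, bright-end slope $\beta$.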
This is to be expected, given that Akiyama et al. (2018) identified AGN based on a compactness criterion in the ground-based HSC data. If instead we use our spectroscopic criterion in determining $f_{\rm AGN}$, we find a significantly steeper faint-end slope of the AGN LF of $\alpha=-1.85\pm 0.05$. In this case, our data points start to diverge from the Akiyama et al. (2018) points, due to a higher proportion of AGN-dominated sources at faint magnitudes in this parameterisation (see Fig. 7). Adams et al. (2020) found $\alpha=-1.66^{+0.29}_{-0.58}$ in the fit of a DPL (LBG component) with a PL (AGN component). The large error on this value is a consequence of the degeneracy between the bright-end slope of the LBG LF and the AGN faint-end slope. Our measurement of the slope of the faint-end of the AGN LF is more strongly constrained by the data-points at $M_{\rm UV}\gtrsim-24$, which allows us to find a best-fit that is shallower than, but still consistent within $1.5\sigma$ with, the Adams et al. (2020) result. Table 1: The luminosity function parameterisations for the separated SF and AGN results shown in Fig. 8. The upper part of the table shows the results when separating according to a morphological criterion, while the lower part shows the results when AGN are identified according to their spectra. The first column indicates the sub-sample that was fit (SF-dominated or AGN-dominated). For the SF case, we show the results for a Schechter function and DPL fit in the first and second row, respectively. The fit to the AGN case was performed with a single power law, with the normalisation calculated at a fixed $M_{\rm UV}$ highlighted with an asterisk. The second and third columns give the characteristic absolute magnitude and normalisation. The fourth column shows the faint-end slope for the SF and AGN fits, and the final column shows the bright-end slope for the DPL fit.
| Type | $M^{*}$ $/{\rm mag}$ | $\phi^{*}$ $/{\rm mag}/{\rm Mpc}^{3}$ | $\alpha$ | $\beta$ |
|---|---|---|---|---|
| _Morphology:_ | | | | |
| SF (Sch.) | $-21.00(0.10)$ | $1.36(0.24)\times 10^{-3}$ | $-1.75(0.13)$ | – |
| SF (DPL) | $-21.53(0.06)$ | $0.36(0.05)\times 10^{-3}$ | $-2.07(0.07)$ | $-5.15(0.10)$ |
| AGN | $-25.70^{*}$ | $2.48(0.31)\times 10^{-7}$ | $-1.19(0.05)$ | – |
| _Spectroscopy:_ | | | | |
| SF (Sch.) | $-20.97(0.09)$ | $1.44(0.25)\times 10^{-3}$ | $-1.72(0.13)$ | – |
| SF (DPL) | $-21.50(0.05)$ | $0.37(0.04)\times 10^{-3}$ | $-2.05(0.06)$ | $-5.15(0.09)$ |
| AGN | $-25.70^{*}$ | $0.77(0.21)\times 10^{-7}$ | $-1.83(0.11)$ | – |
| _Adams et al. (2020):_ | | | | |
| Sch. | $-20.89^{+0.12}_{-0.10}$ | $1.62^{+0.33}_{-0.27}\times 10^{-3}$ | $-1.66^{+0.13}_{-0.08}$ | – |
| +PL | $-25.70^{*}$ | $0.71^{+0.44}_{-0.39}\times 10^{-7}$ | $-2.09^{+0.32}_{-0.38}$ | – |
| DPL | $-21.37^{+0.08}_{-0.11}$ | $0.50^{+0.10}_{-0.06}\times 10^{-3}$ | $-1.92^{+0.07}_{-0.04}$ | $-4.92^{+0.29}_{-0.25}$ |
| +PL | $-25.70^{*}$ | $0.85^{+0.81}_{-0.34}\times 10^{-7}$ | $-1.66^{+0.29}_{-0.58}$ | – |

## 5 Discussion

In this work we have investigated the transition in the properties of $z\simeq 4$ sources at $M_{\rm UV}\simeq-23$, where the number densities of faint AGN and bright galaxies converge. From our imaging data we observe a change in the source morphology, through a sharp drop in the average size of sources in the size–luminosity relation that is also seen in the individual source morphologies. We also see a change in the features present in the available rest-frame UV spectra for the sample. The absolute UV magnitude at which this transition occurs corresponds to the point of rapid decline in the bright-end of the galaxy LF. Furthermore, the form of the increase in the AGN fraction towards brighter magnitudes depends on the shape of the galaxy LF bright-ward of the knee in the function ($M_{\rm UV}\simeq-21$; see Table 1). There has been an ongoing discussion on the shape of the rest-frame UV LF at high redshifts. While the UV LF at $z\gtrsim 4$ has typically been fitted by a Schechter function (e.g.
McLure et al., 2013; Finkelstein et al., 2015; Bouwens et al., 2015), recent results have demonstrated an excess of highly luminous galaxies relative to the Schechter function predictions (Bowler et al., 2014; Ono et al., 2018; Bowler et al., 2020). In our derived AGN fraction, and in the corresponding SF-dominated LFs, we have found evidence for a shallower decline in the number density of the brightest SF galaxies at $z\simeq 4$. Most strikingly, the discovery of an extremely bright source at $M_{\rm UV}=-23.6$ with no evidence for AGN spectral features (ID1448401 in Fig. 5) supports an $f_{\rm AGN}\simeq 0.8$ ($0.38$–$0.95$ within the errors) at this magnitude, which leads to a number density of sources well in excess of the Schechter function prediction. This finding is potentially in conflict with studies that have found support for a Schechter function form at $z\simeq 3$–$4$ (van der Burg et al., 2010; Hathi et al., 2010; Bian et al., 2013; Parsa et al., 2016); however, it is only recently that the datasets available at $z\simeq 4$ have had sufficient volume to adequately constrain the number density of the rarest galaxies/faint AGN. If we fit our SF-dominated LF with a DPL, the results are in good agreement with the evolution in the DPL parameters derived in Bowler et al. (2015) and Bowler et al. (2020), who found a steady steepening of the bright-end from $z\simeq 9$ to $z\simeq 5$, consistent with the increasing impact of dust. If this steepening is due to dust obscuration in the most highly star-forming galaxies, then the effects of this dust should be observable both in the colours of bright LBGs and directly via reprocessed emission in the far-infrared. Interestingly, the brightest spectroscopically confirmed LBG in our sample (ID1448401) shows a very blue rest-frame UV continuum (rest-frame UV slope $F_{\lambda}\propto\lambda^{\beta}$; $\beta\simeq-2$).
From the observed colour–magnitude relation at this redshift, this source would be expected to show a redder slope of $\beta\simeq-1.4$ (Lee et al., 2011; Bouwens et al., 2014). Rogers et al. (2014) demonstrated that at $z\simeq 5$ there is an increased scatter in the rest-frame UV slopes of LBGs towards brighter magnitudes, and thus it is plausible that this LBG is a rare example of a highly star-forming galaxy ($SFR\simeq 80\,{\rm M}_{\odot}/{\rm yr}$; Madau et al., 1998) with little dust attenuation at this redshift. Thus, while overall the increased production and attenuation of dust in the most highly SF galaxies from $z\simeq 9$–$4$ could cause a steepening of the bright-end slope of the rest-frame UV LF, this does not preclude the existence of galaxies with high SFRs and a lack of dust obscuration within this epoch.

### 5.1 The faint-end of the AGN UV LF

There has been renewed interest in recent years in the slope of the faint-end of the rest-frame UV LF of AGN. This was motivated by the claimed detection of high-redshift X-ray sources by Giallongo et al. (2015), who used their data to suggest that UV-faint AGN could contribute significantly to the process of reionization at $z>6$. Such an analysis relies on the integral of an extrapolated rest-frame UV LF to determine the total number of ionizing photons that can be produced by AGN at very high redshifts. The subsequent studies of Boutsia et al. (2018) and Giallongo et al. (2019) have further claimed an excess of sources at the faint-end of the $z\simeq 4$ AGN UV LF. While several works have called these results into question at $z>4$ (e.g. Parsa et al., 2018; McGreer et al., 2018; Cowie et al., 2020), the determination of an accurate slope of the faint-end of the AGN LF remains of interest. Our observations demonstrate that the derived slope of the $z\simeq 4$ AGN LF depends strongly on the selection method. We can reproduce the flatter slope found in Akiyama et al.
(2018) by using a criterion on morphology to separate AGN-dominated sources from the full $z\simeq 4$ LF. If instead we use a spectroscopic determination of $f_{\rm AGN}$ to estimate the AGN LF, we derive a steeper faint-end slope ($\alpha\sim-1.8$). The rest-frame UV spectroscopic features of AGN are strong and broad emission lines (e.g. Fig. 5), while LBGs are expected to show absorption features or potentially weak nebular emission lines (Stark et al., 2014; Shapley et al., 2003; Steidel et al., 2016). Thus, we expect any classification of a source as an AGN based on the rest-frame UV spectrum to be sensitive not only to the brightest AGN-dominated objects, but also to objects in which the light from SF is significant. In contrast, AGN selections that impose a point-like morphology will exclude objects in which the host-galaxy UV light causes the source to be rejected as too extended. Given the wide range in imaging depths and compactness criteria used in different AGN selections, it is challenging to quantify the resulting incompleteness of previous studies. It is clear, however, from our study and other works (e.g. Matsuoka et al., 2018b), that faint-ward of $M_{\rm UV}\simeq-24$ it is necessary to account for both AGN- and SF-dominated sources.

### 5.2 Evolution of the AGN to SF transition into the EoR

Both the AGN and LBG rest-frame UV LFs are known to evolve rapidly at $z\gtrsim 4$. In Bowler et al. (2020) we found evidence for a flattening of the bright-end slope with increasing redshift over the range $z=5$–$10$, with a corresponding evolution in $M^{*}$ according to $\Delta M^{*}/\Delta z\simeq-0.5$. This was interpreted as a result of decreased dust obscuration and mass quenching within the Epoch of Reionization (EoR). Such a change in shape would be imprinted onto the measured AGN fraction at these redshifts, with a predicted fainter transition magnitude and an extended tail of highly luminous SF galaxies.
This prediction is consistent with the tentative detection of weak AGN features in the rest-frame UV spectra of moderately bright LBGs at $z\gtrsim 6$ (Laporte et al., 2017; Tilvi et al., 2016), as we predict a higher $f_{\rm AGN}$ to fainter magnitudes within this epoch. These detections, however, are at odds with the expected number density of faint AGN at $z\simeq 6$, which has been shown to undergo an accelerated decline at $z>5$ (McGreer et al., 2013; Jiang et al., 2016). From the extrapolated number densities of high-redshift AGN, we do not expect that current LBG samples at $z\geq 7$ will contain any AGN-dominated sources, as AGN are only expected to be more numerous than LBGs at $M_{\rm UV}\leq-24$ (see discussion in Bowler et al., 2014). This is consistent with the lack of point-sources found at $z\simeq 6$–$7$, where it has been possible to obtain high-resolution imaging of the brightest sources with _HST_ (Jiang et al., 2013; Bowler et al., 2017). These somewhat conflicting results could be a result of weaker AGN residing within high-redshift LBGs, or of the misclassification of emission lines as AGN signatures. Taking the results of this study coupled with what is known about the evolving UV LFs at higher redshifts, we predict that AGN ‘contamination’ of rest-frame UV selected samples faint-ward of $M_{\rm UV}\simeq-23$ will be minimal at $z\geq 7$.

## 6 A simple model of the AGN UV LF

We created a toy model of the predicted rest-frame UV LF of AGN to aid in the interpretation of our observations. The model takes the observed UV LF of LBGs and uses this, via simple empirical relations, to estimate the luminosity and number density of UV-bright AGN at the same epoch.

### 6.1 Method

For each galaxy of a given absolute UV magnitude, we first estimate the stellar mass according to the relation found by Duncan et al.
(2014): ${\rm log}_{10}(M_{\star})=(9.02\pm 0.02)-(0.45\pm 0.02)\,(M_{\rm UV}+19.5).$ (1) The slope and normalisation of this relation are consistent between different studies (e.g. Salmon et al., 2015; Song et al., 2016; see figure 5 of Tacchella et al., 2018). This relation has been used in the past to determine the stellar mass functions at high redshift from the rest-frame UV LF, where the effect of scatter is essential in order to reproduce the observed mass and luminosity functions (Stark et al., 2013; Duncan et al., 2014). We therefore include a scatter of $0.4\,{\rm dex}$ in the relationship above, which is consistent with that required in Duncan et al. (2014) and is at the upper end of the measured intrinsic scatter in the $SFR$–$M_{\star}$ relation (Curtis-Lake et al., 2020; Salmon et al., 2015). From the stellar mass we then estimate the black-hole mass using a $m_{\rm BH}$–$M_{\star}$ relation of the form: ${\rm log}_{10}(m_{\rm BH})=G_{\rm BH}\,[{\rm log}_{10}(M_{\star})-11.0]+I_{\rm BH}.$ (2) Here $G_{\rm BH}=d\,{\rm log}_{10}(m_{\rm BH})/d\,{\rm log}_{10}(M_{\star})$ gives the gradient of the relation and $I_{\rm BH}$ the intercept (defined at a mass of ${\rm log}_{10}(M_{\star}/{\rm M}_{\odot})=11.0$). The form, or even the existence, of such a relationship at high redshift is uncertain. We therefore consider two plausible scenarios based on previous results from both luminous quasars at high redshift and low-redshift galaxies. The simplest scenario is one in which the black-hole mass is a constant fraction of the stellar mass at a given redshift. In this case (denoted model A hereafter) we set $G_{\rm BH}=1.0$ and $I_{\rm BH}=9.0+{\rm log}_{10}(1+z)$, which gives $m_{\rm BH}/M_{\star}=0.05$ at $z=4$, as found in observations of high-redshift quasars (e.g. Venemans et al., 2017; Targett et al., 2012). The $1+z$ term follows observations and theoretical arguments for an increased $m_{\rm BH}$ to bulge-mass ratio at high redshifts (e.g. Venemans et al., 2015; Croton, 2006; Wyithe & Loeb, 2003).
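The mapping from host $M_{\rm UV}$ to a total UV magnitude including the AGN, combining Eqs (1)–(2) with the Eddington-ratio and bolometric-correction assumptions described in the text, can be sketched as follows. The Eddington constant, the adopted rest-UV frequency and the AB zero-point conversion are our additions; the functional forms follow our reading of the text and are not the paper's actual code:

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed constants (not stated in the text)
L_EDD_PER_MSUN = 1.26e38     # Eddington luminosity [erg/s] per solar mass
BOL_CORR = 4.4               # bolometric-to-UV correction (Runnoe et al. 2012)
NU_UV = 2.0e15               # approximate rest-UV frequency [Hz]
FOURPI_D10PC_SQ = 1.196e40   # 4*pi*(10 pc)^2 in cm^2, for absolute AB mags

def muv_with_agn(m_uv_sf, z=4.0, model="A", n=1):
    """Chain M_UV(SF) -> M_star -> m_BH -> L_bol -> total M_UV, with scatter."""
    # Eq. (1): stellar mass from UV magnitude, with 0.4 dex scatter
    log_mstar = 9.02 - 0.45 * (m_uv_sf + 19.5) + rng.normal(0.0, 0.4, n)
    # Eq. (2): black-hole mass, with 0.3 dex scatter
    if model == "A":
        g, i = 1.0, 9.0 + np.log10(1.0 + z)
    else:  # model B
        g, i = 1.4, 8.95 + np.log10(1.0 + z)
    log_mbh = g * (log_mstar - 11.0) + i + rng.normal(0.0, 0.3, n)
    # log-normal Eddington ratio around lambda = 0.6, 0.3 dex scatter
    lam = 10.0 ** rng.normal(np.log10(0.6), 0.3, n)
    l_bol = lam * 10.0 ** log_mbh * L_EDD_PER_MSUN
    # AGN UV continuum luminosity and its absolute AB magnitude
    l_nu = (l_bol / BOL_CORR) / NU_UV
    m_uv_bh = -2.5 * np.log10(l_nu / FOURPI_D10PC_SQ) - 48.6
    # combine AGN and host fluxes into a total magnitude
    return -2.5 * np.log10(10.0 ** (-0.4 * m_uv_bh) + 10.0 ** (-0.4 * m_uv_sf))
```

With these numbers, a $10^{9}\,{\rm M}_{\odot}$ black hole at $\lambda=0.6$ comes out at $M_{\rm UV}\approx-25.7$, matching the pivot magnitude used in Table 1.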
Such high ratios of $m_{\rm BH}/M_{\star}$ may not be representative of the AGN population at this redshift (e.g. due to selection effects), hence we treat this scenario as an extreme case. An alternative scenario is one in which the black holes in more massive galaxies are over-massive relative to a linear scaling, whereas those in less massive sources constitute a smaller fraction of the stellar mass. Such a relation has been measured at low redshift by Reines & Volonteri (2015). In this case (denoted model B hereafter) we set $G_{\rm BH}=1.4$ and $I_{\rm BH}=8.95+{\rm log}_{10}(1+z)$. Scatter is also significant in the $m_{\rm BH}$–$M_{\star}$ relation (e.g. see Hirschmann et al., 2010; Volonteri & Reines, 2016), and we therefore include an intrinsic scatter of $0.3\,{\rm dex}$. The result of these steps is a relationship between the $M_{\rm UV}$ of an LBG and the estimated $m_{\rm BH}$. The $m_{\rm BH}$ can then be converted into an estimated bolometric luminosity using an assumption on the Eddington ratio. We assume a log-normal distribution with mean $\lambda=0.6$ and scatter $\sigma=0.3\,{\rm dex}$, as found by Willott et al. (2010b; see also Kelly & Shen, 2013). Finally, we convert the bolometric luminosity into a UV luminosity by assuming a bolometric correction (taken to be $4.4$; Runnoe et al., 2012; Mortlock et al., 2011). From a single $M_{\rm UV}$ from SF we thus obtain a spread in the predicted total absolute UV magnitude due to the addition of an unobscured black hole ($M_{\rm UV,BH}$). Figure 9: The predicted rest-frame UV LF of AGN as derived from our simple models. The upper plot shows the results for a model where $m_{\rm BH}/M_{\star}=0.05$ (model A), and the lower plot shows the results where $m_{\rm BH}\propto M_{\star}^{1.4}$ (model B). The grey dashed line shows the DPL fit to the rest-frame UV LF of star-forming galaxies from Adams et al. (2020). The result of applying empirical scaling relations, with scatter, to this galaxy LF is the AGN prediction shown as the black line.
The effect of scatter is highlighted as the gold shaded region, such that the lower edge of this shading would be the prediction with no scatter. The red solid line shows the predicted obscured Type II AGN LF, and the blue dotted line shows the expected LF for sources in which $L_{\rm BH}/L_{\rm SF}>12.5$. The red dashed line shows the predicted AGN LF without the UV emission from the host galaxy. The open circles show the measurements from Adams et al. (2020) and the grey filled circles show the results from Akiyama et al. (2018), who imposed a criterion on the source morphology. Following these steps resulted in a simulated AGN LF with a relatively flat shape at the bright-end. This is in contrast with the knee in the function around $M_{\rm UV}\sim-26$ found in observations. This effect arises in our model from the creation of infeasibly massive black holes and galaxies, due to the application of scatter in the relations between $M_{\rm UV}$, $M_{\star}$ and $m_{\rm BH}$. While the form of the bright-end of the AGN LF does not impact the results of this work, we nevertheless impose a crude cut in the stellar and black-hole masses to remove these unrealistic sources. We take a limiting stellar mass of ${\rm log}_{10}(M_{\star})=10.8$ from the characteristic mass of high-redshift galaxies (e.g. Ilbert et al., 2013; McLeod et al., 2020), and impose an upper limit on the black-hole mass of ${\rm log}_{10}(m_{\rm BH})=9.5$ to approximate the drop in the (uncertain) black-hole mass functions at $z\simeq 4$ (e.g. Shankar et al., 2009; Kelly & Merloni, 2012). If a model galaxy/black hole exceeds these mass limits due to scatter, we allocate a lower mass at random according to the relations described above. The result of this simple process is a knee in the AGN LF in good agreement with the observations. We calculate the results of this analysis for both the Schechter and DPL forms presented in Adams et al.
(2020), where they fitted only to points at $M_{\rm UV}>-22.0$ to ensure that the results were not influenced by AGN-dominated sources. The resulting AGN LF depends only weakly on the assumed shape of the LBG LF, because the majority of the simulated AGN are hosted in galaxies with $M_{\rm UV}>-22.0$, where there is good agreement between the Schechter and DPL fits. In contrast, the _ratio_ of AGN- to SF-dominated sources at $M_{\rm UV}\sim-23$ does rely on the LBG LF shape, as it depends on how steeply the LBG LF drops off at the bright-end (see Section 5). To obtain a predicted AGN LF that matches the number density of quasars known at $z\simeq 4$, we include two factors to modulate the number of LBGs that host an AGN. The first is the obscured fraction, which describes how many ‘on’ AGN are not bright in the rest-frame UV continuum (e.g. are obscured Type II AGN). We fixed $f_{\rm obsc.}=0.6$ (Ueda et al., 2014; Vito et al., 2018). The second modulating factor is the fraction of galaxies that host an active black hole, which is a proxy for the duty cycle of AGN activity. $f_{\rm active}$ was determined in our model as the factor required to bring the predicted AGN LF into agreement with the observed AGN LF in the range $-27<M_{\rm UV}<-24$. We find $f_{\rm active}=0.0007$ for model A and $f_{\rm active}=0.003$ for model B. Note that $f_{\rm active}$ is not directly comparable to the duty cycle of AGN activity, as it is the fraction of AGN that appear bright in the rest-frame UV rather than the fraction of active black holes.

### 6.2 Results

We present the results of this analysis in comparison with the observed LBG and AGN rest-frame UV LFs in Fig. 9. Despite the simplicity of the model, it does a reasonable job of reproducing the shape of the observed $z\simeq 4$ AGN LF at luminosities bright-ward of $M_{\rm UV}\simeq-23.5$. A striking feature of the predicted LFs shown in Fig.
9 is the dominant role of scatter, which is well known to be important from observations of the brightest quasars (e.g. Venemans et al., 2017; Willott et al., 2013; Targett et al., 2012). Furthermore, both models predict a similarly steep faint-end slope, consistent with that found by other empirical predictions (e.g. Veale et al., 2014; Ren et al., 2020; Delvecchio et al., 2020). For sources around $M_{\rm UV}\simeq-23$, our two models give different predictions for the importance of scatter in the observed galaxies. This has consequences for the expected morphological and spectroscopic properties of sources around this magnitude. The AGN LFs from our toy model do not show the flattening observed in the results of Akiyama et al. (2018) faint-ward of $M_{\rm UV}\simeq-24$. Akiyama et al. (2018) used a moment-based measure to select compact sources in ground-based HSC data, while our model does not make any assumption about the size or morphology of each model galaxy/AGN. If we impose the condition that the AGN luminosity must be a certain multiple of the stellar light (e.g. $L_{\rm BH}/L_{\rm SF}\gtrsim 10$), then we are able to reproduce the flattening found in the data, but only in the case where the black-hole mass is an increasing fraction of the stellar mass (model B). The magnitude of the cut is motivated by previous studies comparing the host-galaxy and AGN emission in the rest-frame UV and optical for sources selected as quasars (typically $\Delta M=2$–$3\,{\rm mag}$; Jahnke et al., 2004; Schramm et al., 2008; Goto et al., 2009; Mechtley et al., 2016; Lawther et al., 2018). When fitting the Akiyama et al. (2018) points with the ratio as a free parameter, we found a best-fit ratio of $L_{\rm BH}/L_{\rm SF}=12.5\pm 0.5$ for model B. The drop in the AGN LF in this case is due to the host-galaxy rest-frame UV light becoming significant at $M_{\rm UV}>-23$.
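In magnitudes, a luminosity-ratio cut of this kind is a simple offset: $L_{\rm BH}/L_{\rm SF}>12.5$ corresponds to the AGN outshining the host by $\Delta M=2.5\,{\rm log}_{10}(12.5)\approx 2.7\,{\rm mag}$, within the 2–3 mag range quoted above. A hypothetical helper implementing this selection:

```python
import numpy as np

def agn_dominated(m_uv_bh, m_uv_sf, ratio_cut=12.5):
    """Mimic a compactness-style selection: keep sources whose AGN
    rest-frame UV luminosity exceeds the host's by `ratio_cut`."""
    ratio = 10.0 ** (-0.4 * (np.asarray(m_uv_bh) - np.asarray(m_uv_sf)))
    return ratio > ratio_cut
```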
Such a cut is incomplete to fainter AGN (relative to their host galaxy UV emission) and therefore results in an artificial flattening in the observed AGN LF as shown in Fig. 9. If instead black holes populate host galaxies as in our model A, where the effect of scatter in shaping the observed AGN LF at fainter magnitudes is dominant, then such a flattening is not predicted with this relative luminosity cut. The effect can also be seen in Fig. 10 where we compare the observed $f_{\rm AGN}$ to that predicted from our model. These results demonstrate that the morphological and spectroscopic properties of sources around $M_{\rm UV}\simeq-23$ give important information about how active black holes are distributed within host galaxies. If the rest-frame UV is always dominated by light from the AGN, then selections based on a point-source condition will be complete (model A). Such a selection will not be feasible at fainter magnitudes, however, due to LBGs themselves becoming more compact. If instead the host galaxy light can become important at fainter magnitudes, as in our model B, then we see a distinct incompleteness in point-source selections for AGN at $M_{\rm UV}>-23$. In this case it becomes essential to define the relative ‘AGN-strength’ that is being included with a given selection methodology, to fully understand what population is being measured. Figure 10: The predicted AGN fraction as a function of absolute UV magnitude at $z=4$ from our toy model. The models are compared to the derived fraction from our morphology (blue circles) and spectroscopy (orange squares) data. The yellow dashed (solid) lines show the predicted $f_{\rm AGN}$ from model A (B) with all AGN included. The blue dot-dashed (dotted) line corresponds to the $f_{\rm AGN}$ if a luminosity cut of $L_{\rm BH}/L_{\rm SF}$ is imposed on model A (B), to identify AGN-dominated sources. We see that with this cut model B is able to reproduce our observed morphology-based AGN fraction.
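The toy model used throughout this section, an LBG LF modulated by $f_{\rm obsc.}$ and $f_{\rm active}$ with scatter between host and AGN luminosity, can be sketched as a convolution in magnitude space. All parameter values below are illustrative placeholders, not the fitted values of this work:

```python
import numpy as np

# Double power-law (DPL) LBG luminosity function; illustrative parameters only
def dpl_lf(M, phi_star=1e-3, M_star=-21.0, alpha=-1.9, beta=-5.0):
    d = M - M_star
    return phi_star / (10 ** (0.4 * (alpha + 1) * d) + 10 ** (0.4 * (beta + 1) * d))

M = np.arange(-30.0, -17.0, 0.01)          # absolute UV magnitude grid
phi_lbg = dpl_lf(M)

# Gaussian scatter (in mag) between host luminosity and AGN UV luminosity
sigma = 1.0
kM = np.arange(-5.0, 5.0 + 1e-9, 0.01)
kernel = np.exp(-0.5 * (kM / sigma) ** 2)
kernel /= kernel.sum()                      # normalise the smoothing kernel

f_obsc, f_active = 0.6, 1e-3               # illustrative modulating factors
phi_agn = f_active * (1.0 - f_obsc) * np.convolve(phi_lbg, kernel, mode="same")

# Eddington-type bias: scatter dominates the bright end of the predicted AGN LF
i = np.argmin(np.abs(M + 26.0))
print(phi_agn[i] > f_active * (1.0 - f_obsc) * phi_lbg[i])  # True
```

Because the LBG LF falls so steeply bright-ward of $M^{*}$, the convolution scatters many intrinsically fainter hosts to bright observed magnitudes, which is the dominant-scatter behaviour described above.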
### 6.3 Obscured Type-II AGN So far in this work we have considered only unobscured Type I AGN, which we expect to contribute significantly to the rest-frame UV continuum luminosity of the source. In the orientation-based unified model of AGN we also expect obscured Type II-like AGN, where the presence of an AGN is only observable in the UV via narrow emission lines. In our model of the AGN LF as derived from the LBG LF, we can predict the number density of these obscured sources. Interestingly, our preferred model (model B), which can better explain our observations of the $f_{\rm AGN}$ and the observed flattening of the faint-end slope found by Akiyama et al. (2018), also predicts an increased contribution of obscured AGN at $M_{\rm UV}>-22$. This is a natural consequence of our assumed active and obscured fractions of galaxies in the model. In model B we expect to see an increased contribution from obscured AGN at fainter magnitudes, with the number densities of ‘obscured’ and ‘unobscured’ sources becoming comparable (within a factor of $<5$) at $M_{\rm UV}\gtrsim-21$. While there are many assumptions and uncertainties in this prediction, we note that faint-ward of $M_{\rm UV}=-22$ we detect one source with clear AGN signatures in the compilation of spectra for our sample. This source, ID520330, appears to be an obscured Type II source (Fig. 6) at $M_{\rm UV}=-21$. From this one object we estimate the number density of obscured AGN at this magnitude is around $\phi=(7\pm 7)\times 10^{-6}\,{\rm mag}^{-1}\,{\rm Mpc}^{-3}$, which despite the huge uncertainties is within a factor of ten of our model prediction (model B: Fig. 9). These arguments demonstrate that there is still considerable uncertainty in the faint-end of the $z\simeq 4$ AGN UV LF depending on how AGN are defined and on the selection procedure. Given the large samples of $z\simeq 4$ sources available to date from deep optical/NIR surveys (e.g.
Adams et al., 2020; Ono et al., 2018; Bouwens et al., 2015), the next steps to overcome this challenge do not require substantial increases in sample size. Rather, in this work we have demonstrated that a combination of magnitude-limited spectroscopic follow-up and high-resolution imaging will make it possible to probe the connection between faint AGN and their galaxy hosts. ## 7 Conclusions We present the size, morphology and spectroscopic properties of a sample of $3.5<z<4.5$ galaxies and AGN selected based on a photometric redshift fitting analysis in Adams et al. (2020). The broad magnitude range probed by the parent sample ($-26\lesssim M_{\rm UV}\lesssim-20$) allows us to uniquely probe the transition between SF- and AGN-dominated sources. We use both ground-based and _HST_ imaging data to identify the changes in morphology and size, and archival spectra to detect signatures of AGN in the rest-frame UV spectrum. The key conclusions of this study are as follows.
* • We find the expected galaxy size-luminosity relation up to an absolute UV magnitude of $M_{\rm UV}=-22.5$, beyond which we observe a steep downturn due to the increasing presence of objects with a point-source morphology. The effect is seen in both the high-resolution _HST_ $I_{814}$ imaging and the ground-based data. We find that the brightest galaxies in the sample have a highly irregular structure as expected from previous works.
* • The existence of archival spectra for a sub-set of our sample allows us to identify SF- and AGN-dominated sources from the rest-frame UV spectral signatures. At the bright-end of our sample we see clear AGN signatures in the available spectra, while deep spectroscopy from targeted high-redshift surveys shows the expected features of LBGs. We identify a very bright source at $M_{\rm UV}=-23.6$ (${\rm SFR}\simeq 80\,{\rm M}_{\odot}/{\rm yr}$) that shows no evidence for an AGN contribution to the rest-frame UV light.
* • We combine the morphology/size and spectroscopy information to estimate the AGN fraction as a function of $M_{\rm UV}$. We find a steep transition at $M_{\rm UV}\simeq-23.2$ where the number of bright galaxies drops while AGN-dominated sources become ubiquitous. We find a slight tension in the $f_{\rm AGN}$ derived independently from our morphology and spectroscopy data at $M_{\rm UV}\simeq-22.5$, with the spectroscopy results finding a higher fraction by a factor of $\sim 5$.
* • We use this AGN fraction to estimate the separated AGN and SF-dominated rest-frame UV LFs at $z\simeq 4$. We find the bright-end of the SF-dominated LF to be described by a DPL with a bright-end slope of $\beta=-5.15\pm 0.10$. Our LBG UV LF is consistent with that expected from the observed steepening in $\beta$ from $z\simeq 9$–$5$ found by Bowler et al. (2020), which can be explained by an increased effect of dust attenuation in the most highly star-forming galaxies.
* • We find that the slope of the faint-end of the AGN LF depends on how we determine the AGN fraction. If we impose a point-source morphology criterion, as in several recent studies of faint AGN, then we find a shallow slope with $\alpha=-1.19\pm 0.05$. Conversely, if we derive the AGN number density using the spectroscopic results we find a steeper slope of $\alpha=-1.83\pm 0.11$.
* • A simple model of the AGN LF, derived using empirical relations applied to the LBG UV LF at $z=4$, can provide a good description of the transition from AGN to SF-dominated sources. By applying a criterion on the relative emission from the AGN and host galaxy ($L_{\rm BH}/L_{\rm SF}>15$), we are able to reproduce the observed flattening of the $z=4$ AGN LF at $M_{\rm UV}<-22$ found by Akiyama et al. (2018). This flattening is only predicted in the case that the light from SF becomes significant in comparison to the AGN in less massive galaxies.
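The Schechter and DPL parameterisations referred to in these conclusions behave very differently bright-ward of $M^{*}$, which is why the choice of form matters for the inferred AGN/SF split. A short numerical comparison with hypothetical parameters (not the fitted values of this work):

```python
import numpy as np

def schechter(M, phi_star, M_star, alpha):
    """Schechter function in magnitude units."""
    x = 10 ** (-0.4 * (M - M_star))
    return 0.4 * np.log(10) * phi_star * x ** (alpha + 1) * np.exp(-x)

def dpl(M, phi_star, M_star, alpha, beta):
    """Double power law in magnitude units."""
    d = M - M_star
    return phi_star / (10 ** (0.4 * (alpha + 1) * d) + 10 ** (0.4 * (beta + 1) * d))

pars = dict(phi_star=1e-3, M_star=-21.0, alpha=-1.8)
M = -24.0                                   # 3 mag bright-ward of M*
ratio = dpl(M, beta=-5.15, **pars) / schechter(M, **pars)
print(ratio > 100.0)  # True: the DPL bright end far exceeds the Schechter cut-off
```

The exponential cut-off of the Schechter form suppresses the bright end by orders of magnitude relative to a DPL with the same $\phi^{*}$, $M^{*}$ and faint-end slope, so the inferred density of luminous SF-dominated sources, and hence the AGN/SF ratio at $M_{\rm UV}\sim-23$, depends strongly on this choice.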
Our results demonstrate that while the increasingly large samples of $z\simeq 4$ sources have resulted in low statistical errors on the rest-frame UV LF of AGN, there remain considerable systematic uncertainties on the faint-end of this function. In particular, the commonly imposed point-source criterion in the selection of AGN samples at these redshifts can result in incomplete samples of active sources at $M_{\rm UV}>-24$ due to the impact of the host galaxy. The degree of this incompleteness depends on how active black holes populate the underlying galaxy distribution and how these active sources appear in the rest-frame UV light accessible in optical datasets. Upcoming wide-area high-resolution imaging (e.g. from _Euclid_; Laureijs et al., 2012) with extensive spectroscopic follow-up (e.g. from degree-scale multi-object spectrographs like the William Herschel Telescope Enhanced Area Velocity Explorer; Dalton et al., 2012 and the Multi-Object Optical and Near-infrared Spectrograph; Cirasuolo, 2014) will be a powerful combination to understand further the co-evolution of galaxies and AGN at high redshifts. ## Acknowledgements We acknowledge useful discussions with Fergus Cullen, Paul Hewett, Manda Banerji and the ‘Quasar Souls’ group at the Institute of Astronomy at the University of Cambridge. We acknowledge Kate Gould for compiling the archival spectra. We thank the anonymous referee for comments that improved this paper. This work was supported by the Glasstone Foundation and the Oxford Hintze Centre for Astrophysical Surveys which is funded through generous support from the Hintze Family Charitable Foundation. Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England.
The SDSS Web Site is http://www.sdss.org/. The SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions. The Participating Institutions are the American Museum of Natural History, Astrophysical Institute Potsdam, University of Basel, University of Cambridge, Case Western Reserve University, University of Chicago, Drexel University, Fermilab, the Institute for Advanced Study, the Japan Participation Group, Johns Hopkins University, the Joint Institute for Nuclear Astrophysics, the Kavli Institute for Particle Astrophysics and Cosmology, the Korean Scientist Group, the Chinese Academy of Sciences (LAMOST), Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, Ohio State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington. This research uses data from the VIMOS VLT Deep Survey, obtained from the VVDS database operated by Cesam, Laboratoire d’Astrophysique de Marseille, France. This research has made use of the zCosmos database, operated at CeSAM/LAM, Marseille, France. ## Data Availability The datasets used in this work were derived from sources in the public domain. Links to the online repositories and references to the survey data we utilized are listed in Section 2. ## References * Adams et al. (2020) Adams N. J., Bowler R. A. A., Jarvis M. J., Häußler B., McLure R. J., Bunker A., Dunlop J. S., Verma A., 2020, MNRAS, 494, 1771 * Akiyama et al. (2018) Akiyama M., et al., 2018, PASJ, 70, S34 * Alexandroff et al. (2013) Alexandroff R., et al., 2013, MNRAS, 435, 3306 * Bañados et al. (2016) Bañados E., et al., 2016, ApJSS, 227, 11 * Bañados et al. (2018) Bañados E., et al., 2018, Nat, 553, 473 * Barone-Nugent et al. (2015) Barone-Nugent R. L., Wyithe J. S. B., Trenti M., Treu T., Oesch P., Bouwens R., Illingworth G. D., Schmidt K. 
B., 2015, MNRAS, 450, 1224 * Bertin & Arnouts (1996) Bertin E., Arnouts S., 1996, A&AS, 117, 393 * Bian et al. (2013) Bian F., et al., 2013, ApJ, 774, 28 * Boutsia et al. (2018) Boutsia K., Grazian A., Giallongo E., Fiore F., Civano F., 2018, ApJ, 869, 20 * Bouwens et al. (2014) Bouwens R. J., et al., 2014, ApJ, 793, 115 * Bouwens et al. (2015) Bouwens R. J., et al., 2015, ApJ, 803, 34 * Bower et al. (2012) Bower R. G., Benson A. J., Crain R. A., 2012, MNRAS, 422, 2816 * Bowler et al. (2012) Bowler R. A. A., et al., 2012, MNRAS, 426, 2772 * Bowler et al. (2014) Bowler R. A. A., et al., 2014, MNRAS, 440, 2810 * Bowler et al. (2015) Bowler R. A. A., et al., 2015, MNRAS, 452, 1817 * Bowler et al. (2017) Bowler R., Dunlop J., McLure R., McLeod D., 2017, MNRAS, 466, 3612 * Bowler et al. (2020) Bowler R. A. A., Jarvis M. J., Dunlop J. S., McLure R. J., McLeod D. J., Adams N. J., Milvang-Jensen B., McCracken H. J., 2020, MNRAS, 2084, 2059 * Cirasuolo (2014) Cirasuolo M., 2014, SPIE, 9147, 91470N * Clay et al. (2015) Clay S., Thomas P., Wilkins S., Henriques B., 2015, MNRAS, 415, 2692 * Coil et al. (2011) Coil A. L., et al., 2011, ApJ, 741, 8 * Cowie et al. (2020) Cowie L. L., Barger A. J., Bauer F. E., González-López J., 2020, ApJ, 891, 69 * Croton (2006) Croton D. J., 2006, MNRAS, 369, 1808 * Curtis-Lake et al. (2016) Curtis-Lake E., et al., 2016, MNRAS, 457, 440 * Curtis-Lake et al. (2020) Curtis-Lake E., Chevallard J., Charlot S., 2020, preprint (arXiv:2001.08560) * Dalton et al. (2012) Dalton G., et al., 2012, SPIE, 8446, 84460P * Dayal et al. (2014) Dayal P., Ferrara A., Dunlop J. S., Pacucci F., 2014, MNRAS, 445, 2545 * Delvecchio et al. (2020) Delvecchio I., et al., 2020, ApJ, 892, 17 * Duncan et al. (2014) Duncan K., et al., 2014, MNRAS, 444, 2960 * Eisenstein et al. (2011) Eisenstein D. J., et al., 2011, AJ, 142, 24 * Fan et al. (2003) Fan X., et al., 2003, AJ, 125, 1649 * Fèvre et al. (2019) Fèvre O. 
L., Béthermin M., Faisst A., Capak P., Cassata P., Silverman J. D., Schaerer D., Yan L., 2019, preprint(arXiv:1910.09517) * Finkelstein et al. (2015) Finkelstein S. L., et al., 2015, ApJ, 810, 71 * Gavignaud et al. (2006) Gavignaud I., et al., 2006, A&A, 457, 79 * Giallongo et al. (2015) Giallongo E., et al., 2015, A&A, 578, A83 * Giallongo et al. (2019) Giallongo E., et al., 2019, ApJ, 884, 19 * Gonzalez-Perez et al. (2013) Gonzalez-Perez V., Lacey C. G., Baugh C. M., Frenk C. S., Wilkins S. M., 2013, MNRAS, 429, 1609 * Goto et al. (2009) Goto T., Utsumi Y., Furusawa H., Miyazaki S., Komiyama Y., 2009, MNRAS, 400, 843 * Grogin et al. (2011) Grogin N. A., et al., 2011, ApJS, 197, 35 * Hasinger et al. (2018) Hasinger G., et al., 2018, ApJ, 858, 77 * Hathi et al. (2010) Hathi N. P., et al., 2010, ApJ, 720, 1708 * Hirschmann et al. (2010) Hirschmann M., Khochfar S., Burkert A., Naab T., Genel S., Somerville R. S., 2010, MNRAS, 407, 1016 * Huang et al. (2013) Huang K.-H., Ferguson H. C., Ravindranath S., Su J., 2013, ApJ, 765, 68 * Ikeda et al. (2012) Ikeda H., et al., 2012, ApJ, 756, 160 * Ilbert et al. (2013) Ilbert O., et al., 2013, A&A, 556, A55 * Jahnke et al. (2004) Jahnke K., et al., 2004, ApJ, 614, 568 * Jiang et al. (2013) Jiang L., et al., 2013, ApJ, 773, 153 * Jiang et al. (2016) Jiang L., et al., 2016, ApJ, 833 * Kashikawa et al. (2015) Kashikawa N., et al., 2015, ApJ, 798, 28 * Kelly & Merloni (2012) Kelly B. C., Merloni A., 2012, Adv. in Ast., 2012 * Kelly & Shen (2013) Kelly B. C., Shen Y., 2013, ApJ, 764 * Kim et al. (2019) Kim Y., et al., 2019, ApJ, 870, 86 * Koekemoer et al. (2007) Koekemoer A. M., et al., 2007, ApJS, 172, 196 * Koekemoer et al. (2011) Koekemoer A. M., et al., 2011, ApJS, 197, 36 * Laporte et al. (2017) Laporte N., et al., 2017, ApJ, 837, L21 * Laureijs et al. (2012) Laureijs R., et al., 2012, in Clampin M. C., Fazio G. G., MacEwen H. A., Oschmann J. M., eds, Vol. 8442, Space Telescopes and Instrumentation 2012: Optical. p. 
84420T, doi:10.1117/12.926496, http://adsabs.harvard.edu/abs/2012SPIE.8442E..0TL * Law et al. (2012) Law D. R., Steidel C. C., Shapley A. E., Nagy S. R., Reddy N. A., Erb D. K., 2012, ApJ, 745, 85 * Lawther et al. (2018) Lawther D., Vestergaard M., Fan X., 2018, MNRAS, 475, 3213 * Le Fèvre et al. (2015) Le Fèvre O., et al., 2015, A&A, 576, A79 * Lee et al. (2011) Lee K.-s., et al., 2011, ApJ, 733, 99 * Lilly et al. (2007) Lilly S. J., et al., 2007, ApJS, 172, 70 * Lotz et al. (2006) Lotz J. M., Madau P., Giavalisco M., Primack J., Ferguson H. C., 2006, ApJ, 636, 592 * Madau et al. (1998) Madau P., Pozzetti L., Dickinson M., 1998, ApJ, 498, 106 * Mason et al. (2015) Mason C. A., et al., 2015, ApJ, 805, 79 * Massey et al. (2010) Massey R., Stoughton C., Leauthaud A., Rhodes J., Koekemoer A., Ellis R., Shaghoulian E., 2010, MNRAS, 401, 371 * Masters et al. (2012) Masters D., et al., 2012, ApJ, 752, L14 * Matsuoka et al. (2018a) Matsuoka Y., et al., 2018a, ApJS, 237, 5 * Matsuoka et al. (2018b) Matsuoka Y., et al., 2018b, ApJ, 869, 150 * Matute et al. (2013) Matute I., Masegosa J., Márquez I., Husillos C., Olmo A., Perea J., Povi M., 2013, A&A, 557, A78 * McGreer et al. (2013) McGreer I. D., et al., 2013, ApJ, 768, 105 * McGreer et al. (2018) McGreer I., Fan X., Jiang L., Cai Z., 2018, ApJ, 155, 131 * McLeod et al. (2020) McLeod D. J., McLure R. J., Dunlop J. S., Cullen F., Carnall A. C., Duncan K., 2020, 24, 1 * McLure et al. (2013) McLure R. J., et al., 2013, MNRAS, 432, 2696 * McLure et al. (2018) McLure R. J., et al., 2018, MNRAS, 479, 25 * Mechtley et al. (2016) Mechtley M., et al., 2016, ApJ, 830, 156 * Mortlock et al. (2011) Mortlock D. J., et al., 2011, Nat, 474, 616 * Oke (1974) Oke J. B., 1974, ApJS, 27, 21 * Oke & Gunn (1983) Oke J. B., Gunn J. E., 1983, ApJ, 266, 713 * Ono et al. (2018) Ono Y., et al., 2018, PASJ, 70, S10 * Parsa et al. (2016) Parsa S., Dunlop J. S., McLure R. J., Mortlock A., 2016, MNRAS, 456, 3194 * Parsa et al. (2018) Parsa S., Dunlop J. 
S., McLure R. J., 2018, MNRAS, 474, 2904 * Pentericci et al. (2018) Pentericci L., et al., 2018, A&A, 616, A174 * Reines & Volonteri (2015) Reines A. E., Volonteri M., 2015, ApJ, 813, 82 * Ren et al. (2020) Ren K., Trenti M., Di Matteo T., 2020, ApJ, 894, 124 * Richards et al. (2002) Richards G. T., et al., 2002, AJ, 123, 2945 * Richards et al. (2006) Richards G. T., et al., 2006, AJ, 131, 2766 * Rogers et al. (2014) Rogers A. B., et al., 2014, MNRAS, 440, 3714 * Runnoe et al. (2012) Runnoe J. C., Brotherton M. S., Shang Z., 2012, MNRAS, 422, 478 * Salmon et al. (2015) Salmon B., et al., 2015, ApJ, 799, 183 * Schramm et al. (2008) Schramm M., Wisotzki L., Jahnke K., 2008, A&A, 478, 311 * Scoville et al. (2007) Scoville N., et al., 2007, ApJS, 172, 1 * Shankar et al. (2009) Shankar F., Weinberg D. H., Miralda-Escudé J., 2009, ApJ, 690, 20 * Shapley et al. (2003) Shapley A. E., Steidel C. C., Pettini M., Adelberger K. L., 2003, ApJ, 588, 65 * Shen et al. (2003) Shen S., Mo H. J., White S. D. M., Blanton M. R., Kauffmann G., Voges W., Brinkmann J., Csabai I., 2003, MNRAS, 343, 978 * Shin et al. (2020) Shin S., et al., 2020, ApJ, 893, 45 * Sobral et al. (2018) Sobral D., et al., 2018, MNRAS, 482, 2422 * Song et al. (2016) Song M., et al., 2016, ApJ, 825, 5 * Stark et al. (2013) Stark D. P., Schenker M. A., Ellis R., Robertson B., McLure R., Dunlop J., 2013, ApJ, 763, 129 * Stark et al. (2014) Stark D. P., et al., 2014, MNRAS, 445, 3200 * Steidel et al. (2016) Steidel C. C., Strom A. L., Pettini M., Rudie G. C., Reddy N. A., Trainor R. F., 2016, ApJ, 826, 159 * Stevans et al. (2018) Stevans M. L., et al., 2018, ApJ, 863, 63 * Tacchella et al. (2018) Tacchella S., Bose S., Conroy C., Eisenstein D. J., Johnson B. D., 2018, ApJ, 868, 92 * Targett et al. (2012) Targett T. A., Dunlop J. S., Mclure R. J., 2012, MNRAS, 420, 3621 * Tilvi et al. (2016) Tilvi V., et al., 2016, ApJL, 827, L14 * Ueda et al. (2014) Ueda Y., Akiyama M., Hasinger G., Miyaji T., Watson M. 
G., 2014, ApJ, 786, 104 * Vanden Berk et al. (2001) Vanden Berk D. E., et al., 2001, AJ, 122, 549 * Veale et al. (2014) Veale M., White M., Conroy C., 2014, MNRAS, 445, 1144 * Venemans et al. (2015) Venemans B. P., Walter F., Zschaechner L., Decarli R., De Rosa G., Findlay J. R., McMahon R. G., Sutherland W. J., 2015, ApJ, 816, 37 * Venemans et al. (2017) Venemans B. P., et al., 2017, ApJ, 837, 146 * Vito et al. (2018) Vito F., et al., 2018, MNRAS, 473, 2378 * Volonteri & Reines (2016) Volonteri M., Reines A. E., 2016, ApJ, 820, L6 * Warren et al. (1994) Warren S. J., Hewett P. C., Osmer P. S., 1994, ApJ, 421, 412 * Willott et al. (2010a) Willott C. J., et al., 2010a, AJ, 139, 906 * Willott et al. (2010b) Willott C. J., et al., 2010b, AJ, 140, 546 * Willott et al. (2013) Willott C. J., Omont A., Bergeron J., 2013, ApJ, 770, 13 * Wyithe & Loeb (2003) Wyithe J. S. B., Loeb A., 2003, ApJ, 595, 614 * Yang et al. (2020) Yang J., et al., 2020, ApJ, 897, L14 * van der Burg et al. (2010) van der Burg R. F. J., Hildebrandt H., Erben T., 2010, A&A, 523, A74 ## Appendix A Spectroscopically confirmed sources In Table 2 we present the brightest sources in our parent sample that have been spectroscopically confirmed. In addition to the published spectroscopic redshift, for some of these sources we were able to obtain reduced spectra which we present in Fig. 5. As discussed further in Adams et al. (2020) we overlap with the majority of the Boutsia et al. (2018) sample. As part of our analysis we identified a discrepancy between the absolute magnitudes presented by Boutsia et al. (2018), which can be up to $1\,{\rm mag}$ fainter than the $M_{\rm UV}$ we calculated from our best-fitting SED model. In Boutsia et al. (2018) the $M_{\rm UV}$ is estimated by applying a $K$-correction to the $r-$band data, which typically hosts the Lyman-break from $z=3.9$–$4.7$. Our analysis demonstrates that this method can underestimate the $M_{\rm UV}$. 
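As an illustration of the kind of $M_{\rm UV}$ calculation discussed above, the following sketch computes an absolute UV magnitude from an apparent magnitude using the distance modulus plus a flat ($f_{\nu}={\rm const.}$) K-correction term. The cosmology ($H_{0}=70\,{\rm km\,s^{-1}\,Mpc^{-1}}$, $\Omega_{\rm m}=0.3$, flat) is an assumption for illustration; the $M_{\rm UV}$ values in this work come from best-fitting SED models, so exact agreement with the table entries is not expected:

```python
import numpy as np

def lum_distance_mpc(z, h0=70.0, om=0.3):
    """Luminosity distance in Mpc for an assumed flat LCDM cosmology."""
    zs = np.linspace(0.0, z, 10001)
    ez = np.sqrt(om * (1.0 + zs) ** 3 + (1.0 - om))
    # trapezoidal integration of the comoving distance; c in km/s
    dc = 299792.458 / h0 * np.sum(0.5 * (1.0 / ez[:-1] + 1.0 / ez[1:]) * np.diff(zs))
    return (1.0 + z) * dc

def m_to_muv(m_app, z):
    """Absolute magnitude with a flat K-correction term 2.5 log10(1+z)."""
    dl_pc = lum_distance_mpc(z) * 1e6      # luminosity distance in pc
    return m_app - 5.0 * np.log10(dl_pc / 10.0) + 2.5 * np.log10(1.0 + z)

print(round(m_to_muv(22.4, 4.0), 1))  # -23.6
```

Applying a K-correction to a band that straddles the Lyman break, as discussed above, can instead absorb flux decrement into the correction and yield a fainter $M_{\rm UV}$.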
In addition to this discrepancy, we find we cannot reproduce the higher number density of AGN derived by Boutsia et al. (2018) ($\phi=1.6\times 10^{-6}$ at $M_{\rm UV}=-23.5$). This is puzzling given that the majority of the Boutsia et al. (2018) sources are reselected in this work. Table 2: The spectroscopically confirmed high-redshift sources from the full $z\simeq 4$ sample. We present the objects with $M_{\rm UV}<-22.0$, with the COSMOS and XMM-LSS sources in the upper and lower part of the table respectively. The first column is the source ID number followed by the R.A. and Declination. In column 4 we present the total HSC $I$-band apparent magnitude, followed by the best-fitting photometric redshift with a Galaxy and QSO template in columns 5 and 6 (in brackets after the photometric redshift is the $\chi^{2}$ of the fit), and the spectroscopic redshift in column 7. The final two columns denote the absolute UV magnitude followed by a note indicating the origin of the spectroscopic measurement. KB18 corresponds to Boutsia et al. (2018), zCOS to zCOSMOS, PR to Primus, VAN to VANDELS. We denote with an asterisk the objects for which we were able to obtain the rest-frame UV spectrum, presented in Section 3.3.

ID | R.A. | Dec. | $I_{\rm HSC}$ | $z_{\rm gal}$ | $z_{\rm qso}$ | $z_{\rm spec}$ | $M_{\rm UV}$ | Notes
---|---|---|---|---|---|---|---|---
188657 | 9:57:52.16 | +1:51:20.08 | 21.20 | 0.27 (39.0) | 3.92 (20.1) | 4.174 | -24.90 | KB18(658294),zCOS*
203718 | 10:01:56.55 | +1:52:18.80 | 21.88 | 4.19 (70.3) | 4.28 (50.3) | 4.447 | -24.18 | zCOS,PR*
657658 | 10:02:48.91 | +2:22:11.88 | 21.69 | 3.75 (96.0) | 3.52 (151.7) | 3.748 | -24.12 | KB18(1163086),zCOS,PR*
702265 | 10:00:24.23 | +2:25:09.86 | 22.55 | 4.28 (123.4) | 4.36 (98.0) | 4.596 | -23.94 | PR
153468 | 9:58:08.09 | +1:48:33.10 | 22.40 | 3.92 (19.1) | 3.88 (25.4) | 3.986 | -23.75 | KB18(664641)
113309 | 10:00:25.77 | +1:45:33.11 | 22.40 | 3.79 (126.7) | 4.16 (87.0) | 4.140 | -23.72 | KB18(330806)
677759 | 10:02:33.23 | +2:23:28.74 | 22.45 | 3.50 (123.0) | 3.72 (58.1) | 3.650 | -23.43 | KB18(1159815)
724788 | 9:59:06.46 | +2:26:39.39 | 22.35 | 3.73 (122.8) | 3.92 (81.7) | 4.170 | -23.42 | KB18(1273346),PR
523406 | 9:59:31.01 | +2:13:32.88 | 22.46 | 3.48 (70.0) | 3.60 (52.7) | 3.650 | -23.37 | KB18(1054048)
908052 | 9:59:22.37 | +2:39:32.63 | 23.49 | 3.72 (76.3) | 4.08 (32.9) | 3.748 | -22.99 | KB18(1730531)
840823 | 10:00:54.52 | +2:34:34.90 | 23.75 | 4.43 (15.1) | 4.56 (36.7) | 4.539 | -22.57 | DEIMOS(842313)
112866 | 10:01:26.67 | +1:45:26.16 | 23.87 | 4.44 (9.2) | 4.60 (11.6) | 5.137 | -22.49 | DEIMOS(308643)
654636 | 10:01:31.60 | +2:21:57.73 | 24.01 | 4.48 (8.6) | 4.52 (25.1) | 4.511 | -22.34 | VUDS(5101210235)
606304 | 10:01:12.50 | +2:18:52.58 | 24.09 | 4.46 (60.9) | 4.44 (32.0) | 5.691 | -22.27 | VUDS(5101218326)
697618 | 10:01:19.91 | +2:24:47.47 | 24.21 | 4.46 (2.3) | 4.56 (8.4) | 4.419 | -22.14 | DEIMOS(733857)
278034 | 2:18:44.46 | -4:48:24.59 | 19.74 | 4.13 (144.0) | 4.44 (98.3) | 4.574 | -26.48 | SDSS*
1364622 | 2:27:54.62 | -4:45:35.37 | 20.31 | 3.60 (48.6) | 3.72 (97.3) | 3.741 | -25.64 | SDSS*
919928 | 2:24:13.41 | -5:27:24.73 | 20.31 | 3.56 (44.2) | 3.80 (24.2) | 3.779 | -25.55 | SDSS*
1448906 | 2:25:27.23 | -4:26:31.21 | 21.32 | 3.54 (66.1) | 3.68 (29.3) | 3.835 | -24.57 | PR, VVDS*
45737 | 2:18:05.65 | -5:26:35.58 | 21.75 | 3.77 (45.5) | 3.64 (102.1) | 4.077 | -24.21 | PR
456332 | 2:17:14.17 | -4:20:00.54 | 22.30 | 0.47 (65.4) | 4.36 (52.2) | 4.317 | -24.12 | PR
307263 | 2:18:31.37 | -4:43:54.39 | 21.88 | 3.42 (32.0) | 3.64 (13.9) | 3.683 | -24.03 | PR
1448401 | 2:27:54.45 | -4:26:37.97 | 22.29 | 3.63 (24.3) | 3.64 (66.4) | 3.835 | -23.62 | VVDS*
173860 | 2:17:34.38 | -5:05:14.55 | 23.25 | 0.35 (63.9) | 3.88 (28.2) | 3.983 | -22.72 | VAN(199159)*
75407 | 2:17:53.11 | -5:21:24.40 | 23.67 | 0.35 (14.0) | 4.20 (10.0) | 3.802 | -22.64 | VAN(141491)*
1499379 | 2:25:33.71 | -4:15:41.51 | 23.68 | 0.42 (21.7) | 4.28 (16.5) | 3.699 | -22.62 | VVDS*
90440 | 2:18:05.17 | -5:18:55.74 | 23.67 | 0.41 (15.5) | 4.16 (9.6) | 3.921 | -22.58 | VAN(150302)*
1463494 | 2:27:53.87 | -4:23:20.34 | 23.37 | 3.36 (69.4) | 3.56 (54.2) | 3.626 | -22.42 | VVDS*
1485639 | 2:26:59.61 | -4:18:32.88 | 23.67 | 3.82 (10.1) | 4.08 (15.8) | 3.872 | -22.36 | VVDS*
140721 | 2:17:33.77 | -5:10:24.57 | 23.76 | 3.98 (13.4) | 4.20 (22.3) | 4.129 | -22.33 | VAN(018574)*
1522259 | 2:25:33.61 | -4:10:57.99 | 23.79 | 4.12 (20.6) | 4.28 (15.9) | 4.116 | -22.32 | VVDS*

## Appendix B AGN fraction from our fitting In Fig. 11 we show the resulting AGN fractions derived from our fitting procedure to the separated AGN and SF UV LFs. We compare these to the results from Adams et al. (2020) who did not separate the two populations. In comparison to Adams et al. (2020) we find that the DPL and Schechter parameterisations of the SF-dominated sources are consistent within $1\sigma$. The observed differences in the $f_{\rm AGN}$ are instead found to be due to more significant changes in the derived parameters for the power-law fit to the AGN-dominated sub-sample. When a Schechter function form is assumed for the SF-dominated sources, the fit to the AGN-dominated objects has a shallower slope than in Adams et al.
(2020), leading to a brighter point of transition. Conversely, when a DPL function form is assumed for the SF-dominated sources, the power-law fit to the AGN-dominated objects shows a higher normalisation and hence the transition point (where $f_{\rm AGN}\simeq 0.5$) moves faint-wards. Figure 11: The results of our fitting to the separated AGN and SF-dominated sources. The solid (dashed) lines show the $f_{\rm AGN}$ derived when assuming a DPL (Schechter) function for the SF component. In both cases a power law is assumed for the AGN component. The black lines show the results from a simultaneous fit of both types of sources by Adams et al. (2020). The blue (orange) lines show the results from this work, where we have separated SF- and AGN-dominated sources according to the morphology (spectra). We reproduce our observed $f_{\rm AGN}$ points as in Fig. 7.
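The dependence of the transition point on the AGN normalisation described above can be reproduced with a minimal numerical sketch (a power-law AGN LF against a DPL SF LF; every parameter below is hypothetical, chosen only to illustrate the behaviour):

```python
import numpy as np

def dpl(M, phi_star=1e-3, M_star=-21.0, alpha=-1.9, beta=-5.0):
    """Illustrative DPL for the SF-dominated population."""
    d = M - M_star
    return phi_star / (10 ** (0.4 * (alpha + 1) * d) + 10 ** (0.4 * (beta + 1) * d))

def f_agn(M, phi0, gamma=1.6):
    """AGN fraction from a power-law AGN LF (pivot at M = -23) over the total."""
    phi_agn = phi0 * 10 ** (0.4 * gamma * (M + 23.0))
    return phi_agn / (phi_agn + dpl(M))

M = np.arange(-26.0, -20.0, 0.01)

def transition(phi0):
    """Magnitude where f_AGN crosses 0.5."""
    return M[np.argmin(np.abs(f_agn(M, phi0) - 0.5))]

# A higher AGN normalisation moves the f_AGN = 0.5 point faint-ward (larger M)
print(transition(3e-7) < transition(1e-6))  # True
```

Since the SF DPL falls much more steeply bright-ward of the transition than the AGN power law, raising the AGN normalisation shifts the crossing point to fainter magnitudes, which is exactly the behaviour of the DPL fit relative to the Schechter fit in Fig. 11.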
# Stromlo Stellar Tracks: non-solar scaled abundances for massive stars K. Grasha [email protected] Research School of Astronomy and Astrophysics, Australian National University, Canberra, ACT 2611, Australia ARC Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), Australia A. Roy Research School of Astronomy and Astrophysics, Australian National University, Canberra, ACT 2611, Australia ARC Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), Australia R. S. Sutherland Research School of Astronomy and Astrophysics, Australian National University, Canberra, ACT 2611, Australia ARC Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), Australia L. J. Kewley Research School of Astronomy and Astrophysics, Australian National University, Canberra, ACT 2611, Australia ARC Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), Australia ###### Abstract We present the Stromlo Stellar Tracks, a set of stellar evolutionary tracks, computed by modifying the Modules for Experiments in Stellar Astrophysics (MESA) 1D stellar evolution package, to fit the Galactic Concordance abundances for hot ($\mathrm{T}>8000$ K) massive ($\geq 10$ M⊙) Main-Sequence (MS) stars. Until now, all stellar evolution tracks have been computed at solar, scaled-solar, or alpha-element enhanced abundances, and none of these models correctly represents the Galactic Concordance abundances at different metallicities. This paper is the first implementation of Galactic Concordance abundances in stellar evolution models. The Stromlo tracks cover massive stars ($10\leq M/M_{\odot}\leq 300$) with varying rotations ($v/v_{\rm crit}=0.0,0.2,0.4$) and a finely sampled grid of metallicities ($-2.0\leq{\rm[Z/H]}\leq+0.5$; $\Delta{\rm[Z/H]}=0.1$) evolved from the pre-main sequence to the end of $^{12}$C burning.
We find that the implementation of Galactic Concordance abundances is critical for the evolution of main-sequence, massive hot stars in order to estimate accurate stellar outputs (L, Teff, $g$), which, in turn, have a significant impact on determining the ionizing photon luminosity budgets. We additionally support prior findings of the importance that rotation plays in the evolution of massive stars and their ionizing budget. The evolutionary tracks for our Galactic Concordance abundance scaling provide a more empirically motivated approach than simple uniform abundance scaling with metallicity for the analysis of H ii regions and have considerable implications for determining nebular emission lines and metallicity. Therefore, it is important to refine the existing stellar evolutionary models for comprehensive high-redshift extragalactic studies. The Stromlo tracks are publicly available to the astronomical community online. stars: evolution — stars: general — stars: massive — stars: rotation — stars: abundances — ISM: abundances Software: Astropy (Astropy Collaboration et al., 2013, 2018), iPython (Pérez & Granger, 2007), Matplotlib (Hunter, 2007), Numpy (van der Walt et al., 2011; Harris et al., 2020), scipy (Jones et al., 2001), MESA (Paxton et al., 2011). ## 1 Introduction Our understanding of stellar physics and our ability to create realistic and accurate stellar evolution models across a wide range of stellar parameters heavily impact our ability to create physically realistic stellar population synthesis and photoionization models to interpret galaxy spectra. Most stellar evolutionary models (e.g., BaSTI (Pietrinferni et al., 2004; Hidalgo et al., 2018); Geneva (Ekström et al., 2012); Padova (Girardi et al., 2004); Y2 (Yi et al., 2001, 2003; Demarque et al., 2004)) assume solar or scaled-solar abundances (Anders & Grevesse, 1989; Asplund et al., 2009). Two immediate problems arise from currently available stellar tracks.
First, solar (Anders & Grevesse, 1989; Asplund et al., 2009) relative abundance ratios conflict with the abundance ratios observed in H ii regions in the Milky Way, Large Magellanic Cloud, or Small Magellanic Cloud at a given metallicity (Morel, 2009; Nieva & Przybilla, 2012; Nicholls et al., 2017). The solar-scaled and alpha-enhanced abundances predict unrealistic stellar quantities (L, $g$, Teff, etc.) and thus will predict unrealistic ionizing photon budgets when used to interpret observations. This is critically important as solar-scaled stellar evolutionary tracks are in general not well matched to stellar observations and subsequent nebular emission modeling (see Przybilla, 2008; Morel, 2009; Nicholls et al., 2017; Cazorla et al., 2017; Kewley et al., 2019a). The implementation of abundance patterns is especially critical in massive, hot stars that dominate the excitation sources for H ii regions, because nearby OB stars quite often exhibit metal abundances that are generally lower than solar estimates (Morel, 2009). Iron abundances relative to $\alpha$-element abundances change as a function of time and with the galactic environment (Wyse & Gilmore, 1993) and this systematic variation of [$\alpha$/Fe] needs to be taken into account at different metallicities. It is not just iron that changes with metallicity – a complete census of the chemical history of other elements and their evolution with overall metallicity is critical to determine accurate metallicity measurements in galaxies, especially at high redshift. The [NII]/[OII] ratio is an ideal abundance diagnostic (Kewley & Dopita, 2002), but at high redshift the [OII]$\lambda\lambda$3727, 3729 doublet is often unobservable, relying on calibrations based on [NII]/H$\alpha$ (Denicoló et al., 2002) and/or the [NII]/[OIII] ratio (Pettini & Pagel, 2004).
Often, only red line ratios are available, such as H$\alpha$, [NII], and [SII], and the O/H ratio must then rely on indirect methods using combinations of these line ratios (Dopita et al., 2016). Second, coarse metallicity grids are usually assumed in the calculation of stellar evolutionary tracks. This coarse metallicity gridding results in inaccurate values when interpolating stellar quantities ($L$, $g$, $T_{\rm eff}$, etc.) for metallicities falling between the grid points. The coarse metallicity grids limit the resolution at which theoretical strong-line metallicity diagnostics can be reliably calculated (Kewley et al., 2019b, a). Interpolation of metallicity diagnostics in coarse metallicity grids prevents accurate measurements of nebular emission lines for star-forming galaxies and potentially has important consequences for the interpretation of galactic properties of high-redshift galaxies, such as their star formation rates (SFRs; e.g., Kewley et al., 2004) and ionization parameters, which show metallicity dependence in ionization parameter diagnostics (Kewley & Dopita, 2002; Kewley et al., 2019a). Accurate determination of the chemical evolutionary state of distant galaxies will be critically important for next-generation observatories such as JWST that will reveal the first galaxies. These objects will not be physically represented by current stellar evolutionary models at Solar or alpha-enhanced abundances, which will predict unrealistic stellar quantities and thus unrealistic ionizing photon budgets. New stellar evolution models that are not limited to Solar or alpha-enhanced abundance ratios are critically needed. New stellar evolution models and their opacities additionally need to be calculated at much finer metallicity intervals of $\sim$0.1–0.2 dex in log(O/H) to avoid inaccurate values when interpolating stellar quantities.
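The penalty paid for a coarse metallicity grid can be illustrated with a toy calculation: linearly interpolating any smoothly curved diagnostic on a 0.5 dex grid produces a much larger worst-case error than on the 0.1 dex grid adopted in this work. The function below is a stand-in for a strong-line calibration, not a published one:

```python
import numpy as np

# Toy diagnostic: a smooth nonlinear function of [Fe/H] standing in for a
# strong-line metallicity calibration (illustrative only, not a real calibration)
def diagnostic(feh):
    return np.sin(1.5 * feh) + 0.3 * feh**2

fine = np.arange(-2.0, 0.51, 0.1)      # 0.1 dex grid, as adopted for the Stromlo tracks
coarse = np.arange(-2.0, 0.51, 0.5)    # a typical coarse grid

# Worst-case linear-interpolation error over the metallicity range
targets = np.linspace(-1.95, 0.45, 200)
truth = diagnostic(targets)
err_fine = np.max(np.abs(np.interp(targets, fine, diagnostic(fine)) - truth))
err_coarse = np.max(np.abs(np.interp(targets, coarse, diagnostic(coarse)) - truth))
```

Because linear-interpolation error scales with the square of the grid spacing, refining the grid from 0.5 to 0.1 dex reduces the worst-case error by roughly an order of magnitude for any smooth diagnostic.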
These new tracks will allow, for the first time, consistent abundance ratios to be used in stellar population synthesis and photoionization models to derive accurate, high-resolution metallicity diagnostics. This paper will enable the future development of atmosphere and photoionization nebular modeling with the same physical inputs to self-consistently predict the emission spectra arising from ionized nebulae and the central ionizing stellar population. This paper is the first in this series and presents the stellar evolutionary tracks using Galactic Concordance abundances to be used in modern stellar population synthesis, atmosphere, and photoionization models. The Galactic Concordance reference standard and scaling system serves as an empirical basis for interpreting observations and determining the physical conditions of H ii regions. The paper is organized as follows. In Section 2, we describe the numerical methods adopted in our stellar evolution models. Section 3 reports the results and a comparison between the Stromlo non-uniform-abundance stellar tracks and the scaled-solar MIST stellar tracks. We discuss the impact of the abundance ratio implementation on the ionizing photon budget in Section 4. We conclude and summarize our results in Section 5.

## 2 Method: stellar evolution calculation

In order to explore the impact of non-uniformly scaled abundances on stellar evolutionary tracks, we construct the Stromlo Stellar Tracks (publicly available online at https://sites.google.com/view/stromlotracks), self-consistent stellar evolution models using the Modules for Experiments in Stellar Astrophysics (MESA, http://mesa.sourceforge.net; Paxton et al., 2011, 2013, 2015, 2018, 2019) stellar evolution code. We build upon the stellar evolutionary models used in the MESA Isochrones and Stellar Tracks (MIST) by Choi et al.
(2016) to present models with scaled abundances based on Milky Way stellar abundance data, referred to as ‘Galactic Concordance’ (Nicholls et al., 2017). We focus on massive ($>$10 $M_{\odot}$) hot stars, which dominate the ionizing budget that powers H ii regions, provide the feedback that regulates the efficiency of star formation (McKee & Ostriker, 2007; Krumholz et al., 2012; Hopkins et al., 2012), and drive turbulence and wind outflows. We configure MESA to include all the same physical processes and parameter values as used in the MIST library as described by Choi et al. (2016), with modifications to improve the treatment of massive stars as outlined by Roy et al. (2020). These include our adoption of Galactic Concordance abundances to replace the commonly adopted uniform abundance scaling (Section 2.1), our radiative opacities (Section 2.2), and our treatment of mixing mechanisms (Section 2.3). The software used to generate the Stromlo Tracks is the GALCON-HOT-MIST package, to be described in Roy et al. (in preparation). For this work, we use MESA version v9793 compiled with GNU Fortran version 7.2.0 installed as part of the MESA SDK (http://www.astro.wisc.edu/~townsend/static.php?ref=mesasdk). All of our MESA calculations implement the same spatial and temporal resolution conditions as adopted by Choi et al. (2016) for MIST. Below we briefly outline the parameters in MESA/MIST immediately pertinent to this work; we refer the reader to Paxton et al. (2011, 2013, 2015) for more detailed information on MESA and to Choi et al. (2016) for more detailed information regarding the physical processes in MIST. For detailed information regarding the impact of our different setups relative to the MIST models for high-mass stars, and the associated uncertainties in various parameter choices and their effects, we refer the reader to Roy et al. (2020).
### 2.1 Non-solar elemental abundances

We adopt the non-solar abundance standard developed by Nieva & Przybilla (2012) and Nicholls et al. (2017), referred to as ‘Galactic Concordance’ abundances. Galactic Concordance is based on the observed metallicities of 29 main-sequence B-stars in the local Galactic region (Nieva & Przybilla, 2012) and is augmented with elements that are of minor importance in nebular and stellar modeling (Nicholls et al., 2017). Galactic Concordance abundances are representative of present-day, nearby massive stars (Nieva & Przybilla, 2012), and we scale the elements consistently down to [Fe/H] = $-2$ using the scaling relation of Nicholls et al. (2017). The Galactic Concordance reference standard and scaling method provide reliable present-day cosmic abundance reference points for anchoring chemical evolution models to observations. Most importantly for this work, and for future work on creating self-consistent atmosphere and nebular models based on the same abundances, Galactic Concordance allows the stellar abundance scale to be linked to the nebular abundance scale. As elements have been observed to vary systematically with [Fe/H], unlike MIST we do not assume that all elements scale uniformly with Fe; instead we adopt the non-uniform scaling relation for the abundances of Nicholls et al. (2017). We adopt the same linear fits for the elemental scaling parameter given in Nicholls et al. (2017) and use stellar abundance measurements with iron as the reference scale in all our models.
The piecewise linear fit for the iron-based scaling of the $\alpha$ and $\alpha$-like elements X is calculated as follows:

$$[\mathrm{X/Fe}]=\begin{cases}+\Xi_{\rm Fe} & -2.5<[\mathrm{Fe/H}]<-1.0\\ -\Xi_{\rm Fe}\times[\mathrm{Fe/H}] & -1.0<[\mathrm{Fe/H}]<+0.5\\ -\Xi_{\rm Fe}\times 0.5 & [\mathrm{Fe/H}]>+0.5\end{cases} \quad (1)$$

where $\Xi_{\rm Fe}$ is the iron-based scaling factor for each element, listed in Table 2 of Nicholls et al. (2017). The elements H, He, Li, Be, and B are not described by the $\Xi$ parameter: hydrogen is the reference element, and we assume helium scales with the oxygen scaling factor. Carbon and nitrogen are not well described by a simple piecewise linear fit because of the complexities of primary and secondary enrichment, and are instead fit via the expression:

$$\log(\mathrm{X/O})=\log\left(10^{a}+10^{[\log(\mathrm{O/H})+b]}\right), \quad (2)$$

where for carbon $a=-0.8$, $b=2.72$, and for nitrogen $a=-1.732$, $b=2.19$. For fluorine, chlorine, neon, and argon, there are no extensive stellar abundance scaling data, and we assume their abundances scale with oxygen.

### 2.2 Radiative opacities

The radiative opacity tables implemented in MESA are divided into two temperature regimes that are treated separately: high ($\log(T/{\rm K})>4$) and low ($\log(T/{\rm K})<4$) temperatures. Our model grids are all computed in the high-mass regime ($M>10\,M_{\odot}$), and thus we only use the high-temperature opacity tables. The radiative opacities implemented in MESA for the high-temperature regime are from OPAL (Rogers & Iglesias, 1992; Iglesias & Rogers, 1993, 1996) or OP (Seaton, 2005). Following Choi et al. (2016), we use OPAL opacities. The OPAL opacity tables in MESA, as incorporated in MIST, are computed using Asplund et al. (2009) photospheric abundances.
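The scaling relations of Equations (1) and (2) are straightforward to evaluate; a minimal Python sketch follows. The $\Xi_{\rm Fe}$ values are tabulated in Table 2 of Nicholls et al. (2017) and are not reproduced here, so the value used below is purely illustrative:

```python
import math

def x_over_fe(fe_h, xi_fe):
    """[X/Fe] for an alpha-like element with scaling factor xi_fe (Eq. 1)."""
    if fe_h < -1.0:
        return xi_fe                 # plateau at low metallicity
    elif fe_h < 0.5:
        return -xi_fe * fe_h         # linear decline passing through [Fe/H] = 0
    else:
        return -xi_fe * 0.5          # plateau at high metallicity

def log_x_over_o(log_o_h, a, b):
    """log(X/O) for C or N, combining primary and secondary enrichment (Eq. 2)."""
    return math.log10(10.0**a + 10.0**(log_o_h + b))

# Illustrative scaling factor; actual values come from Nicholls et al. (2017), Table 2
xi = 0.3
# The piecewise fit is continuous at the break points [Fe/H] = -1.0 and +0.5
assert abs(x_over_fe(-1.0 - 1e-9, xi) - x_over_fe(-1.0 + 1e-9, xi)) < 1e-6
```

Note that the three branches join continuously: at [Fe/H] $=-1.0$ both the plateau and the linear branch give $+\Xi_{\rm Fe}$, and at [Fe/H] $=+0.5$ both give $-0.5\,\Xi_{\rm Fe}$.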
The Galactic Concordance non-uniform parametric scaling provides a more physically realistic approach than simple uniform abundance scaling with metallicity. For self-consistency in our stellar tracks, we calculate new OPAL (https://opalopacity.llnl.gov) opacity tables that use Galactic Concordance abundances. The OPAL opacity tables must be re-computed at each metallicity value in our stellar grid (Section 2.5) because we cannot assume uniform scaling, as is traditionally done with the OPAL tables in MESA that assume solar abundances.

### 2.3 Mixing processes

In stellar evolution codes, mixing describes the convective transport of energy within the stellar interior. We implement the Ledoux criterion for the convective mixing of elements, as adopted in Choi et al. (2016). The only change in our mixing methods is the inclusion of the instability caused by magnetic torques from dynamo-generated fields, referred to as the Spruit-Tayler (ST) dynamo, which follows the standard approach used for high-mass stellar evolution in MESA (Heger et al., 2000, 2005). This is combined with the five rotationally induced instabilities used in MIST: the dynamical shear instability (DSI), secular shear instability (SSI), Solberg-Høiland (SH) instability, Eddington-Sweet (ES) circulation, and Goldreich-Schubert-Fricke (GSF) instability. The diffusion coefficients for these six rotational mixing processes are combined with the diffusion coefficients for the non-rotational processes: convection, convection overshoot, semiconvective mixing, and thermohaline mixing. The total diffusion coefficients entering the abundance and angular momentum diffusion equations are the sums over the individual processes. MESA implements mixing using the common approach of treating the transport of chemical composition and angular momentum in a diffusion approximation, with diffusion coefficients $D$ and $\nu$, respectively (e.g., Potter et al., 2012). We calculate $D$ and $\nu$ following Choi et al. (2016), with the additional inclusion of the ST instability as implemented in Roy et al.
(2020) for high-mass stellar evolution. For further details of the mixing mechanisms and angular momentum transport in MIST, we refer readers to Choi et al. (2016).

### 2.4 Mass loss

Mass loss is one of the dominant uncertainties in evolutionary models of massive stars (Smith, 2014). Our treatment of mass loss via stellar winds is based on the “Dutch” mass loss recipe that is standard in MESA. We adopt the Vink et al. (2001) mass loss prescription for metallicity-dependent winds in hot ($T>10^{4}$ K) stars. Mass loss is enhanced by rotation (Section 2.5), and our prescription for mass loss matches the stellar wind recipe of Choi et al. (2016) for MIST. As discussed in Roy et al. (2020), varying the mass-loss rate by a constant factor has minimal impact on the He surface fraction for rapidly rotating stars, because rotational mixing provides a mechanism to bring He to the surface that is independent of mass loss. For non-rotating stars, on the other hand, varying the mass-loss rate by a constant factor has a non-negligible impact on the enhancement of surface abundances. Roy et al. (2020) found that tripling the mass-loss rate for a 100 $M_{\odot}$ star allows surface He enrichment to occur for metallicities as low as [Fe/H] = $-1$. Reducing the mass-loss rate by a factor of 3 prevents the surface He mass fraction from ever rising above 30%, eliminating the WR evolutionary phase even at solar metallicity. The main effect of increasing or decreasing the mass-loss rate is thus to change the maximum amount of surface He enrichment, with a larger impact on non-rotating stars.
### 2.5 Model grid

We calculate extensive grids of stellar evolutionary tracks that cover a wide range in stellar mass, rotation, and metallicity, as follows:

Stellar mass: The stellar mass of the evolutionary tracks ranges from 10 to 300 $M_{\odot}$, following the same spacing as MIST, for a total of 55 models in our mass range: $\Delta M=1\,M_{\odot}$ in the range 10–20 $M_{\odot}$, $\Delta M=2\,M_{\odot}$ in the range 22–40 $M_{\odot}$, $\Delta M=5\,M_{\odot}$ in the range 45–150 $M_{\odot}$, and $\Delta M=25\,M_{\odot}$ in the range 175–300 $M_{\odot}$. We choose these masses to provide sufficient coverage of the range of masses that dominate the photoionization budget, for ease of input into atmosphere and nebular modeling. The models are evolved through the end of carbon burning and are stopped when the central 12C abundance drops to $10^{-4}$.

Stellar abundance: We calculate grids with metallicity values from [Fe/H] = $-$2.0 to +0.5, with 0.1 dex spacing. We calculate Galactic Concordance non-solar-scaled abundance (Nicholls et al., 2017) grids, where the relative fraction of each elemental abundance as a function of metallicity is calculated according to the scaling relations of Section 2.1. We also calculate protosolar abundance (Asplund et al., 2009) grids as implemented in MIST. Following Choi et al. (2016), we calculate the initial helium abundance adopting a scaling of $\Delta$Y/$\Delta$Z = 1.5 with the primordial helium abundance Yp = 0.249. Once Y is computed for a value of Z, we calculate X as X = 1 $-$ Y $-$ Z.

Stellar rotation: Rotation can significantly alter the evolution of massive stars (Heger et al., 2000, 2005) and is particularly important for our massive stellar models. We compute models both with and without rotation. Following the prescription described in Choi et al. (2016), we initialize our rotating stars to begin with solid-body rotation at the Zero Age Main Sequence (ZAMS).
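The initial composition bookkeeping described above, Y = Yp + ($\Delta$Y/$\Delta$Z) Z followed by X = 1 $-$ Y $-$ Z, can be sketched directly (the Z value below is an illustrative choice, not a grid value from the paper):

```python
def initial_composition(Z, Yp=0.249, dY_dZ=1.5):
    """Initial H and He mass fractions for bulk metal mass fraction Z,
    using Y = Yp + (dY/dZ) * Z and X = 1 - Y - Z (following Choi et al. 2016)."""
    Y = Yp + dY_dZ * Z
    X = 1.0 - Y - Z
    return X, Y

# Illustrative, roughly solar metal mass fraction
X, Y = initial_composition(0.0142)
```

By construction the three mass fractions sum to unity for any Z, so only the metal mass fraction needs to be specified per grid point.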
We use $v/v_{\rm crit}$ values of 0, 0.2, and 0.4, where $v_{\rm crit}$ is the critical surface linear velocity, defined at the equator of the star as:

$$v^{2}_{\rm crit}=\left(1-\frac{L}{L_{\rm Edd}}\right)\frac{GM}{R}, \quad (3)$$

where the Eddington luminosity $L_{\rm Edd}$ is:

$$L_{\rm Edd}=\frac{4\pi GMc}{\kappa}, \quad (4)$$

for a star with mass $M$, radius $R$, luminosity $L$, opacity $\kappa$, and speed of light $c$. We adopt a fiducial rotation rate of $v/v_{\rm crit}$ = 0.4 wherever the rotation rate is not explicitly mentioned, as this value is frequently used as a standard rotation rate for models of massive stellar evolution and is supported by theoretical models of massive star formation independent of metallicity (Rosen et al., 2012). We do not implement rotation rates faster than $v/v_{\rm crit}$ = 0.4, as Roy et al. (2020) find qualitatively similar results for all models with $v/v_{\rm crit}$ $>$ 0.4. The results of our grid of stellar evolution models are described in the following section.

## 3 Results

Our primary goal is to produce extensive grids of stellar evolutionary tracks for massive stars that cover a wide range in stellar masses, ages, evolutionary phases, and metallicities at non-solar-scaled abundances, for ease of comparison to solar-scaled abundance stellar tracks. Our high-mass models are terminated at the end of the core carbon burning (C-burn) stage, the point at which the central 12C mass fraction drops below $10^{-4}$.

### 3.1 Stellar Tracks

Figure 1 shows MIST (solar) and Galactic Concordance evolutionary tracks at [Fe/H] = 0.0 and $v/v_{\rm crit}$ = 0.4. The differences in the tracks are relatively minor. More massive stars ($>150\,M_{\odot}$) with solar abundances tend to be hotter and more luminous than in the Galactic Concordance abundance tracks due to their higher metal content.
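The critical-velocity definition can be evaluated directly; a minimal sketch in cgs units, using the dimensionally consistent form $v_{\rm crit}^{2}=(1-L/L_{\rm Edd})\,GM/R$ for a linear equatorial velocity. The stellar parameters below are illustrative stand-ins for a hot massive star, not values from the model grid:

```python
import math

# cgs constants
G = 6.674e-8       # gravitational constant [cm^3 g^-1 s^-2]
c = 2.998e10       # speed of light [cm s^-1]
Msun = 1.989e33    # solar mass [g]
Rsun = 6.957e10    # solar radius [cm]
Lsun = 3.828e33    # solar luminosity [erg s^-1]

def eddington_luminosity(M, kappa):
    """L_Edd = 4 pi G M c / kappa (Eq. 4), kappa in cm^2/g."""
    return 4.0 * math.pi * G * M * c / kappa

def v_crit(M, R, L, kappa):
    """Critical equatorial velocity: v_crit^2 = (1 - L/L_Edd) GM/R (cf. Eq. 3)."""
    gamma = L / eddington_luminosity(M, kappa)   # Eddington factor
    return math.sqrt((1.0 - gamma) * G * M / R)

# Illustrative parameters; kappa = 0.34 cm^2/g is the electron-scattering opacity
v = v_crit(60.0 * Msun, 15.0 * Rsun, 5e5 * Lsun, kappa=0.34)   # ~ a few 10^7 cm/s
```

For these parameters the Eddington factor is roughly 0.2 and $v_{\rm crit}$ comes out near 800 km/s, so the fiducial $v/v_{\rm crit}=0.4$ corresponds to an equatorial velocity of a few hundred km/s.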
The difference between the solar and non-solar-scaled abundance tracks becomes more noticeable, with increasing deviation, at sub-solar metallicities, a direct result of the non-uniform scaling of abundances with metallicity in the Stromlo tracks. This implies that the inferred ionizing spectra from these stars will have an increasingly important impact on the interpretation of H ii region emission line spectra at metallicities less than solar.

Figure 1: [Fe/H] = 0.0 (left), [Fe/H] = $-$1.0 (middle), and [Fe/H] = $-$2.0 (right) grid of stellar evolutionary tracks over a wide range of stellar masses, computed with Asplund et al. (2009) solar abundances (colored dotted lines) and Stromlo Galactic Concordance (Gal Con) abundances (solid pink lines). The stellar tracks for Galactic Concordance and Solar abundances are by design the same at [Fe/H] = 0.0 (left), and the tracks begin to deviate with decreasing metallicity due to the non-uniform scaling of abundances with metallicity.

Figure 2 shows the impact of rotation on the Galactic Concordance and solar stellar evolutionary tracks at [Fe/H] = 0.0, [Fe/H] = $-$1.0, and [Fe/H] = $-$2.0. The rotating models tend to be hotter and more luminous overall than the non-rotating models as a result of the reduced mean opacity in rotating stars. The same effect in the solar-scaled MIST tracks was also reported in Choi et al. (2017). The effect of rotation on the stellar evolutionary tracks has the largest impact at the lowest metallicities ([Fe/H] = $-$2.0 in Figure 2). Overall, rotation has a larger impact on the stellar tracks than abundance ratio changes (Figure 1), with the differences also most noticeable at the lowest metallicities ([Fe/H] = $-$2.0).
Figure 2: [Fe/H] = 0.0 (left), [Fe/H] = $-$1.0 (middle), and [Fe/H] = $-$2.0 (right) grid of Stromlo stellar evolutionary tracks showing the effect of rotation on the evolutionary tracks computed with Galactic Concordance abundances at different stellar masses. Models with rotation ($v/v_{\rm crit}$ = 0.4) are shown as solid lines and non-rotating ($v/v_{\rm crit}$ = 0.0) models are shown as dashed lines. Rotation, together with abundances (Figure 1), plays a significant role in determining $T_{\rm eff}$ and luminosity.

Figures 3 and 4 show the stellar surface gravity $g$ as a function of the effective temperature $T_{\rm eff}$ and compare the impact of abundances and stellar rotation. Similar to a star’s luminosity (Figure 1), we find that for stars with metallicities [Fe/H] $\gtrsim-1$, the Galactic Concordance abundance patterns play an equally important role as rotation (Figure 4) in determining the intrinsic stellar properties $T_{\rm eff}$ and surface gravity $g$. For stars at low metallicities ([Fe/H] $=-2$), rotation has a larger impact in determining the stellar properties (e.g., $T_{\rm eff}$, surface gravity $g$, luminosity) than the correct abundance patterns.

Figure 3: [Fe/H] = 0.0 (left), [Fe/H] = $-$1.0 (middle), and [Fe/H] = $-$2.0 (right) grid of stellar evolutionary tracks showing the surface gravity as a function of effective temperature over a range of stellar masses, computed with Asplund et al. (2009) solar abundances (A09; colored dotted lines) and Stromlo Galactic Concordance abundances (Gal Con; solid pink lines). The stellar tracks for Galactic Concordance and Solar abundances are similar at Solar metallicity and start to deviate, with increasing relative importance, at decreasing metallicities.
Figure 4: [Fe/H] = 0.0 (left), [Fe/H] = $-$1.0 (middle), and [Fe/H] = $-$2.0 (right) grid of Stromlo stellar evolutionary tracks displaying the effect of rotation on the surface gravity as a function of effective temperature for stellar tracks computed with Galactic Concordance abundances at different stellar masses. Models with rotation ($v/v_{\rm crit}$ = 0.4) are shown as solid lines and non-rotating ($v/v_{\rm crit}$ = 0.0) models are shown as dashed lines. Rotation starts to play a more significant role than abundances alone (Figure 3) in determining the stellar properties at low ([Fe/H] $<-$1.0) metallicities.

### 3.2 Rotational Mixing

In the previous section, we examined the impact that rotation plays in determining $T_{\rm eff}$ and luminosity, which appears to be as important as the chemical abundances. $T_{\rm eff}$ and luminosity are closely connected to mass loss and the surface abundances of different elements, and we demonstrate here the measured surface abundances of our massive stars and their relation to rotational mixing. Figure 5 shows the surface 4He mass fraction as a function of time for a sample of initial stellar masses, metallicities ([Fe/H] = 0.0, $-$1.0, and $-$2.0), and three rotation rates ($v/v_{\rm crit}$ = 0.4, 0.2, and 0.0). Figure 5 demonstrates that rotation in stars more massive than $\sim 100\,M_{\odot}$ heavily impacts the surface composition; massive, rotating stars spend the vast majority of their lives on the main sequence with enhanced He surface abundances consistent with the Wolf-Rayet stellar evolutionary phase (e.g., Abbott & Conti, 1987; Crowther et al., 1995; Meynet & Maeder, 2003, 2005; Roy et al., 2020). Rotational mixing and the enhancement of surface He begin very early; the surface He abundance rises to over 40% just after $\sim$2 Myr for rotating ($v/v_{\rm crit}$ = 0.4) stars of $\gtrsim$100 $M_{\odot}$ at metallicity [Fe/H] = 0.0.
The two convective zones (inner core and outer shell) of rotating stars are connected by three dominant rotational transport mechanisms: meridional circulation (ES circulation), the GSF instability, and Spruit dynamo mixing (Roy et al., 2020). For non-rotating massive stars, there are no diffusion mechanisms for the transport of chemical elements from the inner convective core to the outer convective shell, and therefore there is no surface He enhancement. Rotational mixing enhances the surface composition of the massive stars, with the effect increasing with decreasing metallicity. Figure 6 shows the enhancement of the surface mass fraction of 14N, normalized by the initial 14N abundance, as a function of time and rotation rate for a range of initial stellar masses and metallicities. Figures 5 and 6 demonstrate that the surface mass fractions of helium and nitrogen are enhanced at low metallicity regardless of the rotation, and enhanced at high rotation regardless of the metallicity. The enhancement is more prominent for more massive stars. At [Fe/H] = 0.0, our Galactic Concordance stars show a maximum nitrogen surface enhancement of $\sim 14$. For the low-metallicity stars ([Fe/H] $<$ $-$1.0), the nitrogen enhancement is a factor of $\sim$30–32 relative to their initial 14N abundances. The effect of rotational mixing on the observed surface abundances of chemical elements has been noted before (Heger et al., 2000; Meynet & Maeder, 2005; Crowther, 2007) and will impact the surface opacity-age relationships. This in turn will impact the modeled ionizing photon spectrum output compared to stars with non-enhanced He and N surface compositions (e.g., Choi et al., 2017; Roy et al., 2020).

Figure 5: Time evolution of the surface 4He abundances for 60, 100, and 200 $M_{\odot}$ stars with $v/v_{\rm crit}$ = 0.4 (solid lines), 0.2 (dashed lines), and 0.0 (dotted lines) at metallicities of [Fe/H] = 0.0 (left), [Fe/H] = $-$1.0 (middle), and [Fe/H] = $-$2.0 (right) for the Stromlo tracks.
The gray line at 0.4 marks the He abundance that delineates the start of the WR phase of stellar evolution (Meynet & Maeder, 2005).

Figure 6: Time evolution of the surface 14N abundances, normalized by the initial 14N abundance, for 60, 100, and 200 $M_{\odot}$ stars with $v/v_{\rm crit}$ = 0.4 (solid lines), 0.2 (dashed lines), and 0.0 (dotted lines) at metallicities of [Fe/H] = 0.0 (left), [Fe/H] = $-$1.0 (middle), and [Fe/H] = $-$2.0 (right) for the Stromlo tracks. The initial 14N abundances are $6.11\times 10^{-4}$, $8.70\times 10^{-5}$, and $4.02\times 10^{-6}$ for [Fe/H] = 0.0, [Fe/H] = $-$1.0, and [Fe/H] = $-$2.0, respectively.

### 3.3 Main sequence lifetimes

The enhanced mixing in the cores of rotating stars responsible for increasing their brightness also extends the main sequence lifetimes of the stars. Figure 7 shows the main sequence lifetime–initial mass relation. As expected, the main sequence lifetime is longer for rotating stars due to rotational mixing channeling additional fuel into the core of the star. The main sequence lifetimes for the Stromlo rotating models and MIST agree to within 10% at solar metallicity, though the Stromlo lifetimes are slightly shorter. Our rotating Galactic Concordance tracks at solar metallicity do not show the non-monotonic behavior at 80 $M_{\odot}$, with prolonged main sequence lifetimes, seen in the $v/v_{\rm crit}$ = 0.4, [Fe/H] = 0.0 MIST tracks. For all rotating stars, the boost to the main sequence lifetime at a fixed initial mass is marginally larger for the MIST tracks than for the Stromlo tracks, suggesting that rotational mixing may be marginally more efficient in solar-scaled stars.

Figure 7: Main sequence lifetimes as a function of initial mass for two different values of the initial rotation rate ($v/v_{\rm crit}$ = 0.4 in solid lines and $v/v_{\rm crit}$ = 0.0 in dotted lines) at [Fe/H] = 0.0 (left), [Fe/H] = $-$1.0 (middle), and [Fe/H] = $-$2.0 (right).
Our new Stromlo tracks with Galactic Concordance (Gal Con) abundances are shown in pink and the MIST tracks (Choi et al., 2016) are shown in green. At a fixed initial mass, higher rotation rates lengthen the main sequence lifetime due to more efficient rotational mixing. The jagged nature of the lines at decreasing metallicity highlights convergence issues for models that do not run to completion.

### 3.4 Stellar Isochrones

We note that in any grid there are subsets of models that do not run to completion due to convergence issues (Choi et al., 2017, Figure 7). In general this is not an issue: the mass sampling is sufficiently fine that the available tracks, which use the same mass grid as MIST, can be smoothly interpolated to construct smooth isochrones (Dotter, 2016). We do note that at low metallicity the mass grid sampling as laid out by MIST is not always sufficient to represent the fast evolutionary phases at early times; however, we do not model ancient, metal-poor populations ([Fe/H] = $-4$) in the current paper, as is done by Choi et al. (2017), due to computational and convergence difficulties at these very low metallicities. The isochrones for our Stromlo stellar tracks are computed at three rotation rates, $v/v_{\rm crit}$ = 0.0, 0.2, and 0.4, and at each metallicity point in our grid. Because we only compute the stellar evolutionary tracks from 10–300 $M_{\odot}$, our isochrones are only valid for stars $\lesssim$25 Myr old. The Stromlo and solar stellar evolutionary tracks are processed into isochrones following the procedure outlined in Dotter (2016). Figures 8 and 9 show 1, 3, 5, and 10 Myr isochrones at [Fe/H] = 0.0 and [Fe/H] = $-$1.0 and at varying rotation rates. The effect of faster rotation resulting in hotter, brighter, and longer-lived stars is immediately clear.
The Stromlo tracks with Galactic Concordance abundances show the same fast appearance of Wolf-Rayet (WR) stars ($T_{\rm eff}>10^{4}$ K and surface hydrogen mass fraction $X<0.3$; Meynet & Maeder, 2003; Georgy et al., 2012) from the massive star progenitors between 3–5 Myr (Figure 5) as observed in the massive rotating stars with solar abundances (Choi et al., 2017).

Figure 8: [Fe/H] = 0.0 (left), [Fe/H] = $-$1.0 (middle), and [Fe/H] = $-$2.0 (right) isochrones for solar (A09; dotted colored lines) and Stromlo Galactic Concordance (Gal Con; pink solid lines) abundances as a function of age for rotation rate $v/v_{\rm crit}$ = 0.4. The four colors correspond to ages of 1, 3, 5, and 10 Myr, respectively. At solar metallicity, the solar abundance and Galactic Concordance tracks show similar results. At very low metallicities ([Fe/H] = $-$2.0), the Galactic Concordance tracks are in general more luminous.

Figure 9: [Fe/H] = 0.0 (left), [Fe/H] = $-$1.0 (middle), and [Fe/H] = $-$2.0 (right) Stromlo isochrones as a function of age and rotation. The four colors correspond to ages of 1, 3, 5, and 10 Myr, respectively. The linestyle refers to the rotation of the star. Fast rotation generally leads to hotter, brighter, and longer-lived stars. The rotation of the star is as significant in determining the luminosity and $T_{\rm eff}$ as the initial abundances of the stars (Figure 8).

## 4 Discussion

In this section we discuss the implications of rotating, massive star models with non-Solar elemental abundances for the interpretation of high-redshift star-forming galaxies. We examine applications to physical conditions at high and low redshift where these hot, rotating star models will have a dramatic impact. Abundance scaling of elements at metallicities lower than the Solar standard has been explored in stars, but in the nebular modeling community only simple uniform scaling assumptions are typically implemented.
The surface enhancement of elements in massive rotating stars will have broad implications for the ionizing spectra of high-redshift, low-metallicity galaxies. Under the assumption that stars retain their original surface composition until they leave the main sequence, there is minimal impact on the hydrogen-ionizing photon fluxes, and thus the effect of this assumption on the total ionizing photon budget is minimal. Moderate and rapid rotation, however, heavily impacts the original surface composition of these massive stars, and while the hydrogen-ionizing flux remains relatively unchanged, the self-consistently evolved surface composition of fast rotating stars yields fewer photons at the helium-ionizing edge (Roy et al., 2020). The different surface opacity-age relationships make a more significant contribution to the ionizing photon budget than would be expected if the surface composition were un-evolved. This increases the complexity of interpreting spectral line diagnostics and measuring reliable metallicity indicators. Consequently, stellar models in conjunction with photoionization models for nebulae with different metallicities need to take this variation into account in order to accurately model and predict the properties of galaxies across cosmic time. To investigate the impact on the ionizing spectrum of our stellar evolutionary tracks computed using non-Solar-scaled abundances, we model the simple stellar populations (SSPs) of our stellar isochrones (Section 3.4) using the Flexible Stellar Population Synthesis package (FSPS; Conroy et al., 2009; Conroy & Gunn, 2010). We predict the simulated SED for a $10^{6}$ $M_{\odot}$ stellar population following an instantaneous burst of star formation and a fully sampled Kroupa (2001) IMF with limits of 0.08–300 $M_{\odot}$. We implement the MILES empirical library as the primary stellar spectral library (Sanchez-Blazquez et al., 2006; Falcón-Barroso et al., 2011).
We compute the time evolution of the ionizing photon luminosity $Q$ from 1 to 20 Myr for a $10^{6}$ $M_{\odot}$ stellar population at [Fe/H] = $-$2.0, $-$1.0, and 0.0, shown in Figure 10, computed at $v/v_{\rm crit}$ = 0.4 for stellar models with Solar and Galactic Concordance abundances. The models produce comparable hydrogen- and helium-ionizing photon output rates at [Fe/H] = 0.0. The effect of the elemental abundances on the resulting ionizing photon output deviates more at lower metallicity, with the impact occurring at the highest energies, blueward of 228 Å. The Stromlo models predict a softer spectrum than the Solar models, primarily due to differences in the stellar populations of the underlying stellar models (Figure 1). The Stromlo models cease to produce an appreciable number of photons blueward of 228 Å (the wavelength of photons capable of doubly ionizing helium) beyond 5 Myr, 1 Myr earlier than what is seen for the Solar models. Single-star models rely exclusively on the most massive stars as the principal ionizing sources, and the ionizing photon output decreases dramatically upon the disappearance of the most massive stars within the first few Myr. It is important to stress that the only change made to the stellar tracks to compute the ionizing luminosity $Q$ in Figure 10 is the switch in relative elemental abundances from Solar to Galactic Concordance for the Stromlo tracks. The differences in the singly ionizing helium He i (24.6 eV) flux indicate that this change to the tracks could affect important nebular lines compared to tracks computed with solar abundances. There are a large number of key nebular lines between the energy necessary to singly ionize helium (24.6 eV) and the energy required to doubly ionize helium (54.4 eV; Figure 11).
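The ionizing photon luminosity $Q$ discussed above is obtained by integrating an SED blueward of each ionization edge, $Q(<\lambda_{\rm edge})=\int_{0}^{\lambda_{\rm edge}} L_{\lambda}\,\lambda/(hc)\,d\lambda$. A minimal sketch follows, with a toy blackbody standing in for the synthesized FSPS output; the temperature and radius below are illustrative, not values from the models:

```python
import numpy as np

H = 6.626e-27     # Planck constant [erg s]
C = 2.998e10      # speed of light [cm/s]
KB = 1.381e-16    # Boltzmann constant [erg/K]

def ionizing_photon_rate(wav_aa, L_lam, edge_aa):
    """Photon rate Q blueward of an ionization edge:
    Q = int L_lambda * lambda / (h c) dlambda, with wav_aa in Angstrom
    and L_lam in erg/s per cm of wavelength."""
    wav_cm = wav_aa * 1e-8
    m = wav_aa <= edge_aa
    y = L_lam[m] * wav_cm[m] / (H * C)           # photons/s per cm of wavelength
    x = wav_cm[m]
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))  # trapezoid rule

def blackbody_L_lam(wav_cm, T, radius_cm):
    """Toy blackbody SED: L_lambda = 4 pi^2 R^2 B_lambda(T), in erg/s/cm."""
    B = 2.0 * H * C**2 / wav_cm**5 / np.expm1(H * C / (wav_cm * KB * T))
    return 4.0 * np.pi**2 * radius_cm**2 * B

# Illustrative hot-star parameters (not from the Stromlo grid)
wav_aa = np.linspace(100.0, 2000.0, 4000)
L_lam = blackbody_L_lam(wav_aa * 1e-8, T=45000.0, radius_cm=10.0 * 6.957e10)
q_h = ionizing_photon_rate(wav_aa, L_lam, edge_aa=912.0)    # H I edge
q_he2 = ionizing_photon_rate(wav_aa, L_lam, edge_aa=228.0)  # He II edge
```

Because the He ii edge sits far out on the Wien tail, $Q(<228\,$Å$)$ is orders of magnitude smaller than $Q(<912\,$Å$)$ for any stellar-temperature source, which is why the 228 Å output is the most sensitive probe of changes in the underlying tracks.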
This indicates that small changes in the stellar tracks between singly and doubly ionized helium will potentially have a large impact on the nebular ionization and the subsequent nebular lines important for local metallicity and star-formation-rate diagnostics, as well as for photoionization calculations in the early universe. Figure 12 shows the impact of stellar rotation of the Stromlo models on the ionizing photon output. The rotating and non-rotating stars show broad agreement with each other; discrepancies between the (non)rotating models are only significant at the highest energies shortward of 228 Å. The minimal impact of stellar rotation on the hydrogen-ionizing luminosity compared to the helium-ionizing luminosity has been demonstrated in prior studies (Levesque et al., 2012; Choi et al., 2017). The Galactic Concordance abundance patterns play an equally important role as rotation in determining the ionizing photon output from the stellar properties, especially at lower metallicities; adopting stellar models that are not solar-scaled is a critical ingredient in modeled SEDs and in the interpretation of observations using these models. Considering non-Solar scaled abundances in stellar models is critical for accurate measurements of the ionizing photon budget from star clusters in local studies as well as measurements of the escape fraction of ionizing photons in high-redshift galaxies during cosmic reionization. Figure 10: Time evolution of the ionizing photon luminosity $Q$ for the Stromlo models (Gal Con; pink) and models assuming Solar abundances (black) for a $10^{6}$ M⊙ stellar population at [Fe/H] = 0.0 (left), $-$1.0 (middle), and $-$2.0 (right). The line styles represent the ionizing photons capable of ionizing hydrogen H i (912 Å; solid lines), singly ionizing helium He i (504 Å; long dashed lines), and doubly ionizing helium He ii (228 Å; short dotted lines). 
The ionizing photon outputs for Solar versus Galactic Concordance abundances show broad agreement, with the Galactic Concordance stars showing a softer spectrum and the differences becoming more pronounced at lower metallicities. Figure 11: The spectral energy distribution for FSPS simulations of star clusters with ages of 1$-$8 Myr (different colored lines) computed with continuous star formation. The dotted black lines show the energy (wavelength on top axis) for singly ionized hydrogen H i, singly ionized helium He i, doubly ionized oxygen O ii, and doubly ionized helium He ii. The stellar spectra have been normalized to $\log L_{\lambda}=40$ erg/s/Å at 912 Å. The bottom panel shows select ions and the corresponding energy bands, where gray is neutral, red is the region where the ions are singly ionized, orange is doubly ionized, yellow is triply ionized, and green is quadruply ionized. Figure 12: Time evolution of the ionizing photon luminosity $Q$ for the Stromlo models for a $10^{6}$ M⊙ stellar population at [Fe/H] = 0.0 (left), $-$1.0 (middle), and $-$2.0 (right) for two different values of initial rotation rates ($v/v_{\rm crit}$ = 0.4 in pink lines and $v/v_{\rm crit}$ = 0.0 in dark purple lines). The line styles represent the ionizing photons capable of ionizing hydrogen H i (912 Å; solid lines), singly ionizing helium He i (504 Å; long dashed lines), and doubly ionizing helium He ii (228 Å; short dotted lines). Non-rotating stars show a softer spectrum than rotating stars; the differences become more pronounced at lower metallicities and are most significant at the highest energies shortward of 228 Å. Constraining the impact of non-Solar scaled elemental abundances self-consistently with atmosphere and nebular modeling, and their importance for spectral line diagnostics, awaits a full spectral synthesis calculation in future work. 
This paper is the first in a series to approach this problem in a self-consistent manner, which additionally requires a large library of atmospheres covering the full range of physical properties and atmospheric compositions. The atmospheric library and its application to complete nebular radiative transfer modeling will be investigated in the future (R. Sutherland et al., in prep). A future paper will investigate stellar tracks with variable non-Solar elemental abundances, how these abundance composition changes affect the modeled spectrum, and quantify the impact on the ionizing photon budget and spectral line diagnostics (K. Grasha et al., in prep), as well as the implications of the elemental abundances of massive star models for cosmic reionization and the interpretation of high-redshift star-forming galaxies.

## 5 Summary and Conclusions

In this paper we investigate the impact of the stellar elemental abundances on the stellar evolutionary tracks of massive rotating and non-rotating stars over a wide range of metallicities using the Galactic Concordance abundances from resolved Milky Way H ii regions. We use MESA for our stellar evolutionary calculations and include all the same physical processes and parameter values as used in the MIST track library (Choi et al., 2016) for uniformly scaled solar abundances. We focus on massive stars evolved until the end of the main-sequence phase and adopt the same modifications to improve the treatment of massive stars and the implementation of Galactic Concordance abundances as outlined by Roy et al. (2020). We summarize our main conclusions below. 1. The assumed elemental abundance ratios have a minor influence on massive stellar evolutionary tracks at solar metallicities (Figure 1). 
The correct implementation of non-uniformly scaled elemental abundance ratios becomes more significant at sub-solar metallicities, where the differences between Stromlo stars and MIST uniform solar-scaled abundance stars are most pronounced for stars more massive than 50 M⊙ at low metallicities. 2. Rotation and abundances both play a significant role in determining the stellar parameters $T_{\rm eff}$, luminosity, and surface gravity $g$ (Figure 2, Figure 3). Rotation effects are more significant than abundance patterns at lower metallicities in determining $T_{\rm eff}$, luminosity, and surface gravity $g$, as the stars become more compact and angular momentum loss due to winds becomes less important. 3. The significant effect of rotation in determining the stellar $T_{\rm eff}$, luminosity, and surface gravity $g$, especially at low metallicities, has a large impact on the mass-loss and surface abundances of different elements (Figures 5 and 6). Rapidly rotating stars ($v/v_{\rm crit}$ = 0.4) more massive than 100 M⊙ show helium surface abundance enhancement to 40% by $\sim 2$ Myr. Even in non-rotating stars, toward the end of core H depletion, stars experience the ‘classical’ WR phase, approaching the Eddington limit and experiencing rapid mass-loss, which enhances the He surface abundance up to 90%. At lower metallicities ([Fe/H] = $-2$), the effect of rotation on He surface abundance enhancement becomes more significant, with He surface enhancement occurring as rapidly as 1.5 Myr for 100 M⊙ stars. Rotation plays an extremely important role in the surface enhancement of nitrogen, with the effect becoming more important at lower metallicities. 4. Rotation lengthens the main sequence timescale of massive stars (Figure 7) and leads to brighter and hotter stars (Figure 9). 
The boost in the main sequence lifetime at fixed mass is slightly larger in our Galactic Concordance Stromlo tracks than in the MIST solar-scaled models for both rotating and non-rotating stars at low metallicities ([Fe/H] = $-2$; Figure 8), a result of the increased metallicity and a decrease in the mean opacity. 5. The Stromlo tracks show a softer ionizing spectrum compared to expectations from the Solar-scaled ionizing spectrum (Figure 10). The ionizing photon luminosities of the Solar and Galactic Concordance models deviate most significantly at low metallicities. Stellar populations in low metallicity environments, common at high redshift, require only moderate rotation rates to produce significant ionizing photon output, which decreases rapidly after the disappearance of the most massive stars after a few Myr. This paper presents the first implementation in stellar evolution models of Galactic Concordance abundances, rather than solar, scaled-solar, or alpha-element-enhanced abundances. These models will be applicable to extreme regions of star formation, especially low metallicity systems and active star-forming galaxies, where massive and rotating star models at non-uniformly scaled metallicities have the potential to heavily impact the resulting properties of the star-forming emission line regions. The importance of rotating, massive star models and their elemental abundance scaling has broad implications within the context of cosmic reionization and the interpretation of high-redshift star-forming galaxies. The Galactic Concordance scale is not the sole scaling parameter that can be used for non-uniform scaling relations between elements and metallicity. It is vital, however, that a single scale is implemented for all the necessary components involved within stellar population synthesis and photoionization models. 
In the future, we will investigate the effect of Galactic Concordance scaled abundances of massive, rotating stars in low metallicity environments with stellar population synthesis applications, using self-consistent atmosphere and nebular modeling with MAPPINGS to quantify the impact on the ionizing photon budget and spectral line diagnostics. We are grateful for the valuable comments on this work by an anonymous referee that improved the scientific outcome and quality of the paper. This research was conducted on Ngunnawal Indigenous land. KG gratefully acknowledges the support of Lisa Kewley’s ARC Laureate Fellowship (FL150100113). AR acknowledges the usage of the Australian National University RSAA cluster AVATAR and NCI GADI via project jh2 for the implementation of the Galactic Concordance (and also any arbitrary) abundance setups for this work. AR also gratefully acknowledges the support of Mark Krumholz’s Discovery Project (DP160100695) and Future Fellowship (FT180100375) award grants. This research has made use of NASA’s Astrophysics Data System Bibliographic Services. This research made use of Astropy (http://www.astropy.org), a community-developed core Python package for Astronomy (Astropy Collaboration et al., 2013, 2018). Parts of this research were supported by the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013. The authors thank the invaluable labor of the maintenance and clerical staff at their institutions, whose contributions make scientific discoveries a reality. ## References * Abbott & Conti (1987) Abbott, D. C., & Conti, P. S. 1987, ARA&A, 25, 113 * Anders & Grevesse (1989) Anders, E., & Grevesse, N. 1989, Geochim. Cosmochim. Acta, 53, 197 * Asplund et al. (2009) Asplund, M., Grevesse, N., Sauval, A. J., & Scott, P. 2009, ARA&A, 47, 481 * Astropy Collaboration et al. (2013) Astropy Collaboration, Robitaille, T. P., Tollerud, E. J., et al. 
2013, A&A, 558, 33 * Astropy Collaboration et al. (2018) Astropy Collaboration, Price-Whelan, A. M., Sipőcz, B. M., et al. 2018, AJ, 156, 123 * Cazorla et al. (2017) Cazorla, C., Nazé, Y., Morel, T., et al. 2017, arXiv, 123 * Choi et al. (2017) Choi, J., Conroy, C., & Byler, N. 2017, ApJ, 838, 159 * Choi et al. (2016) Choi, J., Dotter, A., Conroy, C., et al. 2016, ApJ, 823, 102 * Conroy & Gunn (2010) Conroy, C., & Gunn, J. E. 2010, ApJ, 712, 833 * Conroy et al. (2009) Conroy, C., Gunn, J. E., & White, M. 2009, ApJ, 699, 486 * Crowther (2007) Crowther, P. A. 2007, ARA&A, 45, 177 * Crowther et al. (1995) Crowther, P. A., Smith, L. J., Hillier, D. J., & Schmutz, W. 1995, A&A, 293, 427 * Demarque et al. (2004) Demarque, P., Woo, J., Kim, Y., & Yi, S. K. 2004, ApJS, 155, 667 * Denicoló et al. (2002) Denicoló, G., Terlevich, R., & Terlevich, E. 2002, MNRAS, 330, 69 * Dopita et al. (2016) Dopita, M. A., Kewley, L. J., Sutherland, R. S., & Nicholls, D. C. 2016, Astrophys. Space Sci., 361, 1 * Dotter (2016) Dotter, A. 2016, ApJS, 222, 8 * Ekström et al. (2012) Ekström, S., Georgy, C., Eggenberger, P., et al. 2012, A&A, 537, A146 * Falcón-Barroso et al. (2011) Falcón-Barroso, J., Sánchez-Blázquez, P., Vazdekis, A., et al. 2011, A&A, 532, A95 * Georgy et al. (2012) Georgy, C., Ekström, S., Meynet, G., et al. 2012, A&A, 542, A29 * Girardi et al. (2004) Girardi, L., Grebel, E. K., Odenkirchen, M., & Chiosi, C. 2004, A&A, 422, 205 * Harris et al. (2020) Harris, C. R., Millman, K. J., van der Walt, S. J., et al. 2020, Nature, 585, 357 * Heger et al. (2000) Heger, A., Langer, N., & Woosley, S. E. 2000, ApJ, 528, 368 * Heger et al. (2005) Heger, A., Woosley, S. E., & Spruit, H. C. 2005, ApJ, 626, 350 * Hidalgo et al. (2018) Hidalgo, S. L., Pietrinferni, A., Cassisi, S., et al. 2018, ApJ, 856, 125 * Hopkins et al. (2012) Hopkins, P. F., Quataert, E., & Murray, N. 2012, MNRAS, 421, 3488 * Hunter (2007) Hunter, J. D. 2007, Comput. Sci. 
Eng., 9, 90 * Iglesias & Rogers (1993) Iglesias, C. A., & Rogers, F. J. 1993, ApJ, 412, 752 * Iglesias & Rogers (1996) —. 1996, ApJ, 464, 943 * Jones et al. (2001) Jones, E., Oliphant, T., & Peterson, P. 2001, SciPy Open source Sci. tools Python * Kewley & Dopita (2002) Kewley, L. J., & Dopita, M. A. 2002, ApJS, 142, 35 * Kewley et al. (2004) Kewley, L. J., Geller, M. J., & Jansen, R. A. 2004, AJ, 127, 2002 * Kewley et al. (2019a) Kewley, L. J., Nicholls, D. C., & Sutherland, R. S. 2019a, ARA&A, 57, 511 * Kewley et al. (2019b) Kewley, L. J., Nicholls, D. C., Sutherland, R. S., et al. 2019b, ApJ, 880, 16 * Kroupa (2001) Kroupa, P. 2001, MNRAS, 322, 231 * Krumholz et al. (2012) Krumholz, M. R., Dekel, A., & Mckee, C. F. 2012, ApJ, 745, 69 * Levesque et al. (2012) Levesque, E. M., Leitherer, C., Ekstrom, S., Meynet, G., & Schaerer, D. 2012, ApJ, 751, 67 * Mckee & Ostriker (2007) Mckee, C. F., & Ostriker, E. C. 2007, ARA&A, 45, 565 * Meynet & Maeder (2003) Meynet, G., & Maeder, A. 2003, A&A, 404, 975 * Meynet & Maeder (2005) —. 2005, A&A, 429, 581 * Morel (2009) Morel, T. 2009, Commun. Asteroseismol., 158, 122 * Nicholls et al. (2017) Nicholls, D. C., Sutherland, R. S., Dopita, M. A., Kewley, L. J., & Groves, B. A. 2017, MNRAS, 466, 4403 * Nieva & Przybilla (2012) Nieva, M. F., & Przybilla, N. 2012, A&A, 539, A143 * Paxton et al. (2011) Paxton, B., Bildsten, L., Dotter, A., et al. 2011, ApJS, 192, 3 * Paxton et al. (2013) Paxton, B., Cantiello, M., Arras, P., et al. 2013, ApJS, 208, 4 * Paxton et al. (2015) Paxton, B., Marchant, P., Schwab, J., et al. 2015, ApJS, 220, 15 * Paxton et al. (2018) Paxton, B., Schwab, J., Bauer, E. B., et al. 2018, ApJS, 234, 34 * Paxton et al. (2019) Paxton, B., Smolec, R., Schwab, J., et al. 2019, ApJS, 243, 10 * Pérez & Granger (2007) Pérez, F., & Granger, B. E. 2007, Comput. Sci. Eng., 9, 21 * Pettini & Pagel (2004) Pettini, M., & Pagel, B. E. J. 2004, MNRAS, 348, L59 * Pietrinferni et al. 
(2004) Pietrinferni, A., Cassisi, S., Salaris, M., & Castelli, F. 2004, ApJ, 612, 168 * Potter et al. (2012) Potter, A. T., Tout, C. A., & Eldridge, J. J. 2012, MNRAS, 419, 748 * Przybilla (2008) Przybilla, N. 2008, Rev. Mod. Astron., 20, 323 * Rogers & Iglesias (1992) Rogers, F. J., & Iglesias, C. A. 1992, ApJS, 79, 507 * Rosen et al. (2012) Rosen, A. L., Krumholz, M. R., & Ramirez-Ruiz, E. 2012, ApJ, 748, 97 * Roy et al. (2020) Roy, A., Sutherland, R. S., Krumholz, M. R., Heger, A., & Dopita, M. A. 2020, MNRAS, 494, 3861 * Sanchez-Blazquez et al. (2006) Sanchez-Blazquez, P., Peletier, R. F., Jimenez-Vicente, J., et al. 2006, MNRAS, 371, 703 * Seaton (2005) Seaton, M. J. 2005, Mon. Not. R. Astron. Soc. Lett., 362, 1 * Smith (2014) Smith, N. 2014, ARA&A, 52, 487 * van der Walt et al. (2011) van der Walt, S., Colbert, S. C., & Varoquaux, G. 2011, Comput. Sci. Eng., 13, 22 * Vink et al. (2001) Vink, J. S., de Koter, A., & Lamers, H. J. G. L. M. 2001, A&A, 369, 574 * Wyse & Gilmore (1993) Wyse, R., & Gilmore, G. 1993, Astron. Soc. Pacific Conf. Ser., 48, 727 * Yi et al. (2001) Yi, S., Demarque, P., Kim, Y., et al. 2001, ApJS, 136, 417 * Yi et al. (2003) Yi, S. K., Kim, Y., & Demarque, P. 2003, ApJS, 144, 259
# Modeling compact binary signals and instrumental glitches in gravitational wave data Katerina Chatziioannou Department of Physics, California Institute of Technology, Pasadena, California 91125, USA LIGO Laboratory, California Institute of Technology, Pasadena, CA 91125, USA Center for Computational Astrophysics, Flatiron Institute, 162 5th Ave, New York, NY 10010, USA Neil J. Cornish eXtreme Gravity Institute, Department of Physics, Montana State University, Bozeman, Montana 59717, USA Marcella Wijngaarden Center for Computational Astrophysics, Flatiron Institute, 162 5th Ave, New York, NY 10010, USA Mathematical Sciences and STAG Research Centre, University of Southampton, SO17 1BJ, Southampton, UK Tyson B. Littenberg NASA Marshall Space Flight Center, Huntsville, AL 35812, USA ###### Abstract Transient non-Gaussian noise in gravitational wave detectors, commonly referred to as glitches, poses challenges for detection and for inference of the astrophysical properties of detected signals when the two are coincident in time. Current analyses aim toward modeling and subtracting the glitches from the data using a flexible, morphology-independent model in terms of sine-Gaussian wavelets before the signal source properties are inferred using templates for the compact binary signal. We present a new analysis of gravitational wave data that contain both a signal and glitches by simultaneously modeling the compact binary signal in terms of templates and the instrumental glitches using sine-Gaussian wavelets. The model for the glitches is generic and can thus be applied to a wide range of glitch morphologies without any special tuning. The simultaneous modeling of the astrophysical signal with templates allows us to efficiently separate the signal from the glitches, as we demonstrate using simulated signals injected around real O2 glitches in the two LIGO detectors. 
We show that our new proposed analysis can separate overlapping glitches and signals, estimate the compact binary parameters, and provide ready-to-use glitch-subtracted data for downstream inference analyses. ## I Introduction During the first half of their third observing run (O3a), the advanced ground-based gravitational wave (GW) detectors LIGO Aasi _et al._ (2015) and Virgo Acernese _et al._ (2015) observed an astrophysical transient signal about every 5 days of data Abbott _et al._ (2020a). The large detection rate increases the chance of observing an event while one of the detectors experiences transient non-Gaussian noise, also known as instrumental glitches. Indeed, this scenario has come to pass for one event from the second observing run (O2) Abbott _et al._ (2017) and 8 events from the first half of the third observing run Abbott _et al._ (2020a). Such coincidences are expected to become even more frequent in the coming years. Planned improvements in the detectors’ sensitivity will be directly reflected in an even larger rate of astrophysical discoveries Abbott _et al._ (2013). Moreover, O3a was characterized by an increase in the rate of glitch occurrence in the two LIGO detectors, a trend that might persist during the fourth observing run (O4) as the decreased average detector noise might help reveal weaker sources of transient noise. For example, the rate of glitches in the LIGO Livingston detector increased from 0.2 per minute in O2 to 0.8 per minute in O3a Abbott _et al._ (2020a). The presence of a non-Gaussian noise feature in the data, a glitch, poses challenges for nearly all inference analyses. GW inference is based on a model for the detector noise, expressed through the likelihood function. In the absence of glitches, detector noise is colored and Gaussian to a very good approximation Chatziioannou _et al._ (2019), with a spectrum that is described through the noise power spectral density (PSD). 
The above considerations give rise to a Gaussian likelihood function whose variance is the noise PSD, a choice that is almost ubiquitous Veitch _et al._ (2015); Abbott _et al._ (2020b). Different choices for estimating the PSD or treating its uncertainty can result in different functional forms for the likelihood, but they are all based on the assumption of colored Gaussian noise Rover _et al._ (2011); Talbot and Thrane (2020). Since instrumental glitches violate the basic assumptions of GW inference, they need to be effectively mitigated before the data are analyzed. One option is to remove the offending data altogether Usman _et al._ (2016); Abbott _et al._ (2017); Sachdev _et al._ (2019); Zackay _et al._ (2019), which can be done quickly, allowing for low latency estimation of source parameters that enable followup observations Abbott _et al._ (2017). The downside of this approach is that part of the astrophysical signal is lost, making it prohibitive for binary black hole (BBH) signals whose duration is comparable to the glitch duration. In order to avoid signal, and thus information, loss, another option is to model the glitch and regress it from the data, leaving behind not only the astrophysical signal but also the Gaussian noise. This approach is the topic of the current study111An independent effort to mitigate the effect of broadband and/or nonstationary detector noise is based on information from auxiliary sensors DeRosa _et al._ (2012); Tiwari _et al._ (2015); Meadors _et al._ (2014); Driggers _et al._ (2019); Davis _et al._ (2019); Vajente _et al._ (2020); Ormiston _et al._ (2020). This approach does not remove entire data segments either and thus is not expected to lead to loss of information.. The wide variety of glitch morphologies, and even variations within a certain glitch type, make constructing exact models for glitches challenging Coughlin _et al._ (2019). 
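The colored-Gaussian-noise assumption corresponds, in the frequency domain, to the standard Whittle form $\ln\mathcal{L} \propto -2\Delta f \sum_k |\tilde{d}_k - \tilde{h}_k|^2 / S_n(f_k)$. A minimal numerical sketch, assuming a toy flat (white) PSD and a synthetic complex-valued signal rather than real detector data:

```python
import cmath
import math
import random

def log_likelihood(data, template, psd, df):
    """Whittle log-likelihood for frequency-domain data d, template h,
    and one-sided noise PSD S_n: ln L = -2 * df * sum |d - h|^2 / S_n."""
    return -2.0 * df * sum(abs(d - h) ** 2 / s
                           for d, h, s in zip(data, template, psd))

random.seed(0)
df = 1.0     # frequency resolution (illustrative units)
n = 256
psd = [1.0] * n  # flat (white) PSD for this toy example

# Synthetic frequency-domain "signal" plus complex Gaussian noise
signal = [cmath.exp(2j * math.pi * 0.1 * k) for k in range(n)]
noise = [complex(random.gauss(0, 0.1), random.gauss(0, 0.1))
         for _ in range(n)]
data = [s + w for s, w in zip(signal, noise)]

# The true template fits far better than a noise-only (zero) template:
ll_signal = log_likelihood(data, signal, psd, df)
ll_zero = log_likelihood(data, [0j] * n, psd, df)
print(ll_signal > ll_zero)
```

A glitch adds unmodeled residual power to $|\tilde{d}-\tilde{h}|^2$, which is exactly why this likelihood breaks down unless the glitch is removed or modeled.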
A more flexible approach is based on BayesWave Cornish and Littenberg (2015); Cornish _et al._ (2020) which models various components of the GW data in a morphology-independent way. Non-Gaussian features in the data are modeled in terms of sums of sine-Gaussian wavelets whose number and parameters are marginalized over with a suite of Markov Chain Monte Carlo (MCMC) and Reversible Jump MCMC (RJMCMC) Green (1995) samplers. Coherent features (i.e. features that appear in all detectors in a manner consistent with an astrophysical signal originating from a specific sky location) are modeled by a single sum of wavelets that is projected onto the detector network; these features are interpreted as having an astrophysical origin. Incoherent features are instead modeled by independent sums of wavelets in each GW detector and are interpreted as instrumental glitches. The PSD of the Gaussian noise is also modeled in terms of splines and Lorentzians using an algorithm sometimes known as BayesLine Littenberg and Cornish (2015); Chatziioannou _et al._ (2019). BayesWave and BayesLine are fully integrated and we will refer to the combined analysis with the name BayesWave in this paper. Modeling instrumental glitches with BayesWave and subtracting them from the data in order to make ready-to-use data for downstream inference has been a standard step of LIGO/Virgo analyses since O2 Abbott _et al._ (2017, 2020a). The GW signal from the first binary neutron star (BNS) coalescence detection, GW170817, overlapped with a glitch in the LIGO Livingston detector approximately $1.1$s before coalescence Abbott _et al._ (2017). The glitch was modeled with BayesWave’s _glitch model_ in terms of a sum of wavelets and removed from the data, a procedure documented and released in BayesWave Glitch Subtraction for GW170817 . Despite the glitch overlapping with the actual astrophysical signal, the subtraction process was robust against inadvertently removing the signal together with the glitch. 
The reason is that the specific glitch was short in duration (less than a second) and extended in frequency, unlike the signal that lasted for about $2$ minutes in the detector sensitive band. As such, the sine-Gaussian wavelets that would fit the glitch and the signal are distinct in terms of their time-frequency features; the wavelets that model the glitch are short and hence do not model the long-lasting BNS signal. This procedure was further shown to not introduce biases in the astrophysical parameter inference of the underlying signal by analyzing simulated signals injected on instances of the same glitch type in LIGO Livingston data Pankow _et al._ (2018). Motivated by the success of this first attempt at glitch mitigation and in preparation for the increased detection rate of O3, BayesWave was extended to be able to simultaneously model both the signal and the glitch Cornish _et al._ (2020). Both signals and glitches are modeled with a sum of sine-Gaussian wavelets, the only difference being that the signal is coherent across the detectors in the network, while the glitch is not. The analysis effectively uses data from all detectors available to determine which part of the non-Gaussian data is coherent (and would thus correspond to an astrophysical signal), and which part is incoherent (and would thus correspond to an instrumental glitch). The combined signal+glitch analysis was applied to one O3a detection Abbott _et al._ (2020a), enabling glitch mitigation even for data that contained short-duration BBH signals. The signal+glitch analysis models compact binary coalescence (CBC) signals in terms of wavelets, and is thus agnostic to the signal morphology. However, accurate models exist for CBCs in terms of solutions to the Einstein field equations that are routinely used both for detection and parameter estimation. 
In this paper we take another step toward efficient separation of CBCs and glitches by constructing an analysis that simultaneously models the CBC signal in terms of CBC templates and the glitch in terms of sine-Gaussian wavelets. Similar to the initial glitch-only analysis and the subsequent signal+glitch analysis, we also model and marginalize over the detector noise PSD. We test our analysis using public O2 data that contain common glitch types and simulated CBC signals. We demonstrate that we can efficiently separate the glitch from the CBC, estimate the CBC parameters, and provide ready-to-use glitch-subtracted data for downstream inference analyses. The rest of the paper is organized as follows. In Sec. II we describe the updates to the standard BayesWave algorithm in terms of the CBC analysis. In Sec. III we apply our analysis to simulated signals overlapping with known detector glitches from O2 data. In Sec. IV we analyze a selection of detected signals, namely GW170817 and GW150914. Finally, in Sec. V we conclude and point to future work. ## II General Algorithm Description The combined BayesWave algorithm is presented in detail in Cornish _et al._ (2020) and here we describe only the features relevant to our study. BayesWave simultaneously models signals, glitches, and Gaussian noise in GW data by means of different models. The _signal model_ describes astrophysical signals through a sum of Morlet-Gabor wavelets that are coherent across the detector network. The number of wavelets and the parameters of each are marginalized over, as are the extrinsic parameters that determine how the signal is projected in each detector. The _glitch model_ describes instrumental glitches with an incoherent sum of Morlet-Gabor wavelets whose number and parameters are again marginalized over. Glitch power in each detector is described by an independent sum of such wavelets. 
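A sine-Gaussian (Morlet-Gabor) wavelet is conventionally parameterized by an amplitude, central time $t_0$, central frequency $f_0$, quality factor $Q$ (as in the $Q_{\rm max}$ setting of Table 1), and phase, with envelope width $\tau = Q/(2\pi f_0)$. A minimal sketch of this parameterization; the specific parameter values below are purely illustrative:

```python
import math

def sine_gaussian(t, amp, t0, f0, q, phi):
    """Morlet-Gabor (sine-Gaussian) wavelet: a Gaussian envelope of width
    tau = Q / (2 pi f0) modulating a sinusoid at central frequency f0."""
    tau = q / (2.0 * math.pi * f0)
    envelope = amp * math.exp(-((t - t0) / tau) ** 2)
    return envelope * math.cos(2.0 * math.pi * f0 * (t - t0) + phi)

# Illustrative wavelet: unit amplitude, centered at t0 = 0,
# f0 = 100 Hz, Q = 5 (so tau ~ 8 ms), zero phase.
h0 = sine_gaussian(0.0, 1.0, 0.0, 100.0, 5.0, 0.0)   # peak at t = t0
h1 = sine_gaussian(0.05, 1.0, 0.0, 100.0, 5.0, 0.0)  # 50 ms later: decayed
print(h0, abs(h1) < abs(h0))
```

The quality factor directly controls the time-frequency trade-off discussed in the text: low-$Q$ wavelets are short and broadband (blip-like), while high-$Q$ wavelets are long and narrowband (as allowed for scattered light in Table 1).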
The _noise model_ describes the Gaussian noise PSD with a broadband spline model and sharp Lorentzians. As above, the number of spline points and Lorentzians as well as their parameters are marginalized over. In order to sample the multidimensional posterior density of all models, BayesWave uses a blocked Gibbs sampler that takes turns between sampling each model with completely independent MCMC or RJMCMC samplers. This includes (i) an RJMCMC that samples the signal and glitch wavelet parameters, (ii) an MCMC that samples the signal extrinsic parameters, and (iii) an RJMCMC that samples the splines and Lorentzians for the noise PSD. Each sampler in turn updates its parameters for a predetermined number of iterations, typically ${\cal{O}}(10^{2})$, while all other parameters are kept fixed. For example, the extrinsic sampler updates the extrinsic signal parameters while the wavelet parameters and noise PSD are kept constant. Once the predetermined number of updates has been reached, the extrinsic sampler returns its current parameters and the noise sampler begins updating the noise model while keeping the wavelet and extrinsic parameters fixed. This process of alternating sampling between different blocks of model parameters is repeated for ${\cal{O}}(10^{4})$ iterations. The construction of the algorithm in terms of a blocked Gibbs sampler makes adding further models and samplers straightforward. In the current version described in Cornish _et al._ (2020), the astrophysical signal is modeled with coherent sine-Gaussian wavelets that allow us to describe signals with a large level of flexibility. We extend BayesWave’s blocked Gibbs sampler by adding one more element, namely a model of the signal in terms of quasicircular CBC waveforms. In keeping with the existing implementation, the MCMC that samples the posterior distribution for the CBC parameters is completely independent from the remaining code samplers. 
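The block-alternation logic can be illustrated with a toy two-block Gibbs sampler on a correlated bivariate Gaussian, where each "block" updates its own parameter while the other is held fixed. This is a schematic stand-in for BayesWave's signal/glitch/noise alternation, not its actual sampler:

```python
import random

random.seed(1)
RHO = 0.8  # correlation of the toy bivariate standard-normal target

def update_x(y):
    """Block A: draw x from its full conditional x | y ~ N(rho*y, 1-rho^2),
    with y held fixed (analogous to one sampler updating its block)."""
    return random.gauss(RHO * y, (1.0 - RHO ** 2) ** 0.5)

def update_y(x):
    """Block B: draw y from y | x ~ N(rho*x, 1-rho^2), with x held fixed."""
    return random.gauss(RHO * x, (1.0 - RHO ** 2) ** 0.5)

x, y = 0.0, 0.0
samples = []
for it in range(20000):
    # Each block is updated in turn while the other is frozen, mirroring
    # the alternation between the wavelet, extrinsic, and noise samplers.
    x = update_x(y)
    y = update_y(x)
    samples.append((x, y))

# Sanity checks: the chain recovers the target's mean and correlation
mean_x = sum(s[0] for s in samples) / len(samples)
corr = sum(s[0] * s[1] for s in samples) / len(samples)
print(abs(mean_x) < 0.1, abs(corr - RHO) < 0.1)
```

Because each conditional update leaves the joint target invariant, cycling through the blocks samples the full posterior, which is what makes bolting on an extra CBC block (as described next) straightforward.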
The result is a flexible algorithm that can be used with any combination of CBC, signal222We retain the original model names in BayesWave, hence the _signal model_ refers to the wavelet signal model, while the _CBC model_ refers to the model in terms of CBC templates. Both models target astrophysical signals. Since we do not use the _signal model_ in the remainder of the paper, we trust that this will not lead to confusion., glitch, and noise models for the detector data. The CBC model is integrated with LALSimulation LIGO Scientific Collaboration, Virgo Collaboration (2018) and can operate with any nonprecessing model available there333Both the sampling and the jump proposals for the CBC parameters are constructed to expect the signal amplitude and phase from the waveform generator. There is therefore no fundamental limitation to non-precessing signals and we plan to extend our analysis to include the effect of spin-precession in the future.. The eleven parameters of a spin-aligned quasicircular CBC signal, namely the four intrinsic parameters (the two masses and spin magnitudes) and seven extrinsic parameters (the time of coalescence, the phase of coalescence, two sky location angles, the polarization angle, the inclination angle, and the distance), are updated in overlapping blocks. Common to both blocks is the phase of coalescence since BayesWave’s extrinsic sampler updates the overall phase of the signal as described in Cornish _et al._ (2020). The CBC MCMC sampler updates the four intrinsic parameters, the time of coalescence, the phase of coalescence, and the distance. The existing extrinsic sampler in BayesWave updates the two sky angles, the polarization angle, the inclination angle, and the phase of coalescence while holding all other parameters fixed. We use standard priors for all parameters: uniform over the detector-frame masses and spin magnitudes, uniform in time and phase, and uniform in luminosity volume. 
The CBC sampler is custom and not based on any existing samplers used in LIGO-Virgo parameter estimation. The CBC sampler is taken from the recently developed QuickCBC Cornish (2021) analysis pipeline. A closely related sampler Cornish and Shuman (2020) has been developed for analyzing data from the future Laser Interferometer Space Antenna. The CBC sampler is a replica exchange (parallel tempered) Markov Chain Monte Carlo (PTMCMC) algorithm that uses a mixture of proposal distributions. The default collection of proposals comprises: Gaussian jumps along eigenvectors of the Fisher information matrix, scaled by the reciprocal of the square root of the corresponding eigenvalue; differential evolution using a rolling history array at each temperature, updated every 10 iterations and holding 1000 past samples; and small, Gaussian jumps along each parameter direction. Each chain carries its own Fisher information matrix, which is updated periodically. The Fisher and differential evolution proposals are effective at exploring parameter correlations, while the small jumps prevent the chains from getting stuck in regions where the Fisher matrix becomes ill-conditioned. The CBC sampler is not optimized for blindly finding signals, so it is best to initialize the sampler with a good starting solution for the source parameters, such as the output from a CBC search pipeline or the injected parameters for a simulated signal. Alternatively, the sampler can be initialized using a custom-built CBC search algorithm from the QuickCBC Cornish (2021) analysis pipeline that has been incorporated into the BayesWave preprocessing steps. The search is broken into two stages, a rapid network-coherent search with analytic maximization over extrinsic parameters, followed by a fast MCMC over the extrinsic parameters using a likelihood function that precomputes the waveform inner products Cornish (2016). This procedure returns the starting point for all $11$ CBC parameters. 
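Some of the ingredients named above (parallel tempering, differential-evolution proposals from a rolling history updated every 10 iterations, small Gaussian jumps, replica exchange) can be sketched on a toy bimodal one-dimensional posterior. Everything here, from the target to the temperature ladder, is illustrative and far simpler than the actual QuickCBC sampler; in particular, Fisher-matrix proposals are omitted:

```python
import math
import random

random.seed(2)

def log_post(x):
    """Toy bimodal target: mixture of two well-separated unit Gaussians."""
    p = math.exp(-0.5 * (x - 4.0) ** 2) + math.exp(-0.5 * (x + 4.0) ** 2)
    return math.log(p) if p > 0.0 else -1e300

TEMPS = [1.0, 2.0, 4.0, 8.0]            # temperature ladder
chains = [random.uniform(-1, 1) for _ in TEMPS]
history = [[] for _ in TEMPS]           # rolling history per temperature
samples = []                            # cold-chain samples

for it in range(30000):
    for i, temp in enumerate(TEMPS):
        x = chains[i]
        if len(history[i]) > 2 and random.random() < 0.5:
            # Differential-evolution proposal: jump along the difference
            # of two past samples from this temperature's history.
            a, b = random.sample(history[i], 2)
            prop = x + (a - b)
        else:
            prop = x + random.gauss(0.0, 0.5)  # small Gaussian jump
        # Tempered Metropolis accept/reject
        if math.log(random.random()) < (log_post(prop) - log_post(x)) / temp:
            chains[i] = prop
        if it % 10 == 0:  # refresh the rolling history array
            history[i].append(chains[i])
            if len(history[i]) > 1000:
                history[i].pop(0)
    # Replica exchange between a random pair of adjacent temperatures
    j = random.randrange(len(TEMPS) - 1)
    lp1, lp2 = log_post(chains[j]), log_post(chains[j + 1])
    if math.log(random.random()) < (lp2 - lp1) * (1.0 / TEMPS[j]
                                                  - 1.0 / TEMPS[j + 1]):
        chains[j], chains[j + 1] = chains[j + 1], chains[j]
    samples.append(chains[0])

# Hot chains cross the barrier and pass configurations down via swaps,
# so the cold chain visits both modes.
frac_right = sum(1 for s in samples if s > 0) / len(samples)
print(0.1 < frac_right < 0.9)
```

The division of labor mirrors the description in the text: differential-evolution jumps exploit the geometry learned by the history array, small Gaussian jumps keep local mixing alive, and replica exchange lets hot chains hand hard-to-reach configurations down to the cold chain.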
More details about the initial search step and discussion of its robustness against instrumental glitches are presented in Cornish (2021). ## III Simulated Signals Figure 1: Spectrograms for the three glitches of different types studied here: blip glitch (left), scattered light (middle), blue mountain (right). The three types of glitches are characterized by very different time-frequency properties.

Glitch | GPS time (s) | Detector | Segment length (s) | Sampling rate (Hz) | $f_{\textrm{low}}$ (Hz) | $Q_{\textrm{max}}$ | CBC SNR
Blip | 1168989748 | Hanford | 4 | 2048 | 16 | 40 | 15
Scattered light | 1172917779 | Livingston | 8 | 2048 | 8 | 160 | 15
Blue mountain | 1165069536 | Hanford | 16 | 2048 | 16 | 40 | 15

Table 1: Settings for the runs of Sec. III. From left to right, columns correspond to the type of glitch, the GPS time, the affected detector, the segment length, the sampling rate, the low-frequency cutoff, the maximum quality factor of the glitch wavelets, and the SNR of the injected signals. We test the efficacy of separating CBCs from glitches with our CBC+glitch model by selecting $3$ common glitch types from O2 data Gravitational Wave Open Science Center (GWOSC); Abbott _et al._ (2019) that are known to have an adverse effect on searches for CBCs Abbott _et al._ (2020a). We then add simulated CBC signals consistent with a BBH with detector-frame masses of $36M_{\odot}$ and $29M_{\odot}$ and vanishing spin at different times with respect to the glitch. All simulated signals have a signal-to-noise ratio (SNR) of 15. We use the IMRPhenomD Husa _et al._ (2016); Khan _et al._ (2016) waveform model both for simulation and recovery, as implemented in LALSimulation LIGO Scientific Collaboration, Virgo Collaboration (2018).
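Scaling injections to a fixed network SNR of 15 amounts to a linear rescaling of the waveform amplitude under the standard definition $\rho^2 = 4\,\Delta f \sum_k |h(f_k)|^2/S(f_k)$. A hedged sketch of that rescaling, using a toy PSD and a toy inspiral-like amplitude (all names and spectra below are illustrative assumptions, not the paper's actual injection machinery):

```python
import numpy as np

def optimal_snr(hf, psd, df):
    """Optimal matched-filter SNR of a frequency-domain waveform hf
    against a one-sided noise PSD: rho^2 = 4 * df * sum(|h|^2 / S)."""
    return np.sqrt(4.0 * df * np.sum(np.abs(hf) ** 2 / psd))

def rescale_to_network_snr(hf_per_det, psd_per_det, df, target=15.0):
    """Scale a multi-detector injection so that the network SNR
    (root-sum-square over detectors) equals the target."""
    rho_net = np.sqrt(sum(optimal_snr(h, s, df) ** 2
                          for h, s in zip(hf_per_det, psd_per_det)))
    return [h * (target / rho_net) for h in hf_per_det]

df = 0.25
f = np.arange(16.0, 512.0, df)
psd = 1e-46 * ((f / 100.0) ** -4 + 1.0)   # toy detector noise PSD
hf = 1e-24 * (f / 100.0) ** (-7.0 / 6.0)  # toy inspiral-like amplitude
scaled = rescale_to_network_snr([hf, hf], [psd, psd], df, target=15.0)
net_snr = np.sqrt(sum(optimal_snr(h, psd, df) ** 2 for h in scaled))
```

Because the SNR is linear in the waveform amplitude, the rescaled network SNR hits the target exactly, regardless of the PSD shape.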
We then analyze the data from the two LIGO detectors with our CBC+glitch+noise model, where the coherent signal is modeled by the CBC template, the glitch is modeled by incoherent wavelets, and the noise PSD is modeled with splines and Lorentzians. Spectrograms for the $3$ glitches are shown in Fig. 1: blip glitch (left), scattered light (middle), and blue mountain (right). Further details and run settings for each type of glitch are shown in Table 1. ### III.1 Glitch type 1: Blip Blip glitches are one of the most common glitch types in the two LIGO detectors. They are characterized by short duration, and hence pose a challenge for the detection of high-mass BBH signals Cabero _et al._ (2019). Their origin is largely unknown. Figures 2-5 show our results for simulated signals injected at different times with respect to a blip glitch in the LIGO Hanford detector during O2. Details about the glitch, including its GPS time, and the run settings are presented in Table 1. A spectrogram of the data containing the glitch is given in the left panel of Fig. 1, where its short duration and large frequency extent are apparent. The whitened data and reconstructions for the CBC signal and the glitch are shown in Fig. 2, where we plot the 90% credible intervals for each reconstruction in LIGO Hanford (top) and LIGO Livingston (bottom). The glitch is easily visible in LIGO Hanford as a short-duration $\sim 15\sigma$ noise excursion. No glitch power is identified in LIGO Livingston at that time, but the CBC signal is clearly identified. This allows us to separate the corresponding coherent CBC signal in LIGO Hanford from the instrumental glitch, even when the latter overlaps with the merger phase of the signal (left panel). The glitch reconstruction is also consistent across the three simulated signals, suggesting that the glitch model is not fitting any part of the CBC signal. Source parameters for the simulated CBC are presented in Figs.
3 and 4, both for the CBC+glitch+noise analysis and a CBC+noise analysis, for selected recovered parameters of the leftmost simulated CBC signal, together with the injected values denoted by black crosses or vertical lines as appropriate. Figure 3 shows the mass ratio $q$, the effective spin $\chi_{\textrm{eff}}$, and the detector-frame chirp mass ${\cal{M}}$, while Fig. 4 shows the luminosity distance and the cosine of the inclination angle. In all cases the posterior distributions recovered under the CBC+glitch+noise model are consistent with the injected parameters, though the marginalized posteriors do not peak at the injected values, as expected from inference of signals in Gaussian noise. For reference, we show posteriors under the CBC+noise model in orange, which assumes that the data consist of just a CBC signal and Gaussian noise, without any provision for a glitch. Since this assumption is violated by the presence of the blip glitch, the resulting posteriors are expected to be biased compared to the true parameters, and the orange contours in Figs. 3 and 4 quantify this bias. We find that the extrinsic parameters that are primarily determined by the signal amplitude are more biased than the intrinsic ones that are measured through the GW phase, as also discussed in Powell (2018). Figure 2: Credible intervals for the glitch (orange) and the CBC (blue) signal reconstruction for data containing a blip glitch in LIGO Hanford and a simulated CBC signal at $3$ different times with respect to the glitch (left to right). Shaded regions correspond to 90% credible intervals for the whitened reconstruction, while in grey dashed lines we plot the data whitened with a fair draw PSD from our noise model posterior. The top row corresponds to LIGO Hanford and the bottom row corresponds to LIGO Livingston. Figure 3: One- and two-dimensional posterior distributions for selected source parameters of the simulated signal from the left panel of Fig.
2 injected on top of a LIGO Hanford blip glitch. We include the mass ratio $q$, the effective spin $\chi_{\textrm{eff}}$, and the detector-frame chirp mass ${\cal{M}}$ posteriors, while black crosses or black vertical lines denote the true parameters of the injection. Blue (orange) contours and lines correspond to the CBC+glitch+noise (CBC+noise) run. Figure 4: Two-dimensional posterior distributions for the luminosity distance and the binary inclination of the simulated signal from the left panel of Fig. 2 injected on top of a LIGO Hanford blip glitch. A black cross at $(1,1200\,{\rm Mpc})$ denotes the true parameters of the injection. Blue (orange) contours correspond to the CBC+glitch+noise (CBC+noise) run. The separation of the CBC signal from the glitch demonstrated in Fig. 2 can be used to produce ready-to-use deglitched data for downstream inference analyses, as was done in Abbott _et al._ (2020a). An estimate of the glitch reconstruction (the median or a fair draw from the glitch model posterior) is subtracted from the data to produce strain data that contain only the CBC signal and Gaussian noise. The result of the glitch subtraction is shown in the spectrograms of Fig. 5, which show the LIGO Hanford data before (left) and after (middle) the subtraction of a fair draw glitch reconstruction for the leftmost injection of Fig. 2. The left panel includes both the chirping signal and the blip glitch, while only the former is visible in the middle panel. The right panel shows the data after a fair draw from both the CBC and the glitch models has been subtracted, resulting in residual Gaussian noise only. Figure 5: Spectrogram of the LIGO Hanford data around the time of the blip glitch for the leftmost injection from Fig. 2. Left panel: data containing the blip glitch and the simulated CBC signal. Middle panel: data after a fair draw from the glitch model has been subtracted, leaving behind only the chirping CBC signal.
Right panel: data after a fair draw from the glitch and CBC models has been subtracted, leaving behind only Gaussian detector noise. ### III.2 Glitch type 2: Scattered light Glitches caused by scattered light in the interferometer became particularly prominent during O3 Abbott _et al._ (2020a). Unlike the blip glitches studied above, scattered light glitches have a longer temporal duration of a few seconds and are characterized by arches in a time-frequency spectrogram Accadia _et al._ (2010); Soni _et al._ (2020), as depicted in the middle panel of Fig. 1. We inject simulated signals on an instance of such a glitch in LIGO Livingston and analyze the data from both LIGO detectors with our CBC+glitch+noise model. Details of the glitch and the run settings are given in Table 1. Due to the duration of the glitch and its low-frequency power, we extend our analysis duration and bandwidth. The longer duration helps the noise model determine the low-frequency Gaussian noise PSD and thus separate the low-frequency part of the glitch from Gaussian noise. We also increase the maximum quality factor $Q_{\textrm{max}}$ of the wavelets due to the glitch’s long duration. Figure 6 shows the data and reconstructed CBC and glitch models. We zoom in around the CBC signals, though the glitch extends beyond the time range plotted. In all cases the CBC signal is separated from the glitch, aided by the presence of a coherent signal in LIGO Hanford. The glitch reconstruction is also consistent for all $3$ simulated signals, as expected for runs on the same glitch. The reconstruction exhibits oscillations at around $32$Hz and $16$Hz, consistent with expectations from the glitch spectrogram. Figure 7 shows posterior distributions for selected source parameters for the left-most injection in blue, as well as the injected parameters. In all cases the recovered parameters are consistent with their injected values.
In orange, we plot results from a CBC+noise run and find small biases in the source intrinsic parameters, most notably the mass ratio. Figure 6: Credible intervals for the glitch (orange) and the CBC (blue) signal reconstruction for data containing a scattered light glitch in LIGO Livingston and a simulated CBC signal at $3$ different times with respect to the glitch (left to right). Shaded regions correspond to 90% credible intervals, while in grey dashed lines we plot the data whitened with a fair draw PSD from our noise model posterior. The top row corresponds to LIGO Hanford while the bottom row corresponds to LIGO Livingston. Figure 7: One- and two-dimensional posterior distributions for selected source parameters of the simulated signal from the left panels of Fig. 6 injected on top of a LIGO Livingston scattered light glitch. We include the mass ratio $q$, the effective spin $\chi_{\textrm{eff}}$, and the detector-frame chirp mass ${\cal{M}}$ posteriors, while black crosses or black vertical lines denote the true parameters of the injection. Blue (orange) contours and lines correspond to the CBC+glitch+noise (CBC+noise) run. Finally, Fig. 8 shows the spectrogram of the data before and after various components of the model have been subtracted. The left panel corresponds to data that contain both the signal and the glitch, and thus both the signal chirp and the characteristic glitch arches are visible. In the middle panel we plot data after a fair draw from the glitch model has been subtracted; both the high- and low-frequency arches of the glitch have been regressed, leaving only the chirping signal behind. The right panel corresponds to data where a fair draw from the CBC model has further been subtracted and is consistent with Gaussian noise. Figure 8: Spectrogram of the LIGO Livingston data around the time of the scattered light glitch for the leftmost injection from Fig. 6.
Left panel: data containing the scattered light glitch and the simulated CBC signal. Middle panel: data after a fair draw from the glitch model has been subtracted, leaving behind only the chirping CBC signal. Right panel: data after a fair draw from the glitch and CBC models has been subtracted, leaving behind only Gaussian detector noise. ### III.3 Glitch type 3: Blue mountain Figure 9: Credible intervals for the glitch (orange) and the CBC (blue) signal reconstruction for data containing a blue mountain glitch in LIGO Hanford and a simulated CBC signal at $3$ different times with respect to the glitch (left to right). Shaded regions correspond to 90% credible intervals, while in grey dashed lines we plot the data whitened with a fair draw PSD from our noise model posterior. The top row corresponds to LIGO Hanford while the bottom row corresponds to LIGO Livingston. Figure 10: One- and two-dimensional posterior distributions for selected source parameters of the simulated signal from the left panels of Fig. 9 injected on top of a LIGO Hanford blue mountain glitch. We include the mass ratio $q$, the effective spin $\chi_{\textrm{eff}}$, and the detector-frame chirp mass ${\cal{M}}$ posteriors, while black crosses or black vertical lines denote the true parameters of the injection. The final type of glitch we consider is the blue mountain; the spectrogram of the LIGO Hanford instance of a blue mountain glitch we consider is shown in the right panel of Fig. 1. The glitch has a duration of multiple seconds and is characterized by power at higher frequencies, $\sim 200$Hz. We inject simulated signals at different times relative to the glitch and again analyze data from the two LIGO detectors with the CBC+glitch+noise model, with settings shown in Table 1. Due to the large glitch duration, we have to increase the length of the analyzed segment even further, to $16$s.
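The wavelets used throughout are sine-Gaussians (Morlet-Gabor form), whose duration at fixed central frequency scales as $\tau = Q/(2\pi f_0)$; this is why the long scattered-light arches call for a larger $Q_{\textrm{max}}$ while short bursts do not. A minimal sketch of the standard parameterization (our own illustrative code, not BayesWave's):

```python
import numpy as np

def sine_gaussian(t, t0, f0, q, amp=1.0, phi0=0.0):
    """Morlet-Gabor (sine-Gaussian) wavelet with duration tau = q / (2 pi f0):
    small q gives short bursts, large q gives long arches."""
    tau = q / (2.0 * np.pi * f0)
    envelope = np.exp(-(((t - t0) / tau) ** 2))
    return amp * envelope * np.cos(2.0 * np.pi * f0 * (t - t0) + phi0)

t = np.linspace(-0.5, 0.5, 4096)
short_burst = sine_gaussian(t, 0.0, 200.0, q=5.0)  # blip-like: few-ms burst
long_arch = sine_gaussian(t, 0.0, 32.0, q=100.0)   # scattered-light-like arch

tau_short = 5.0 / (2.0 * np.pi * 200.0)   # ~4 ms
tau_long = 100.0 / (2.0 * np.pi * 32.0)   # ~0.5 s
```

At fixed $f_0$ the effective duration grows linearly with $Q$, so a glitch built from many short bursts, like the blue mountain, is well served by many small-$Q$ wavelets rather than a larger $Q_{\textrm{max}}$.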
Despite the glitch’s overall long duration, we do not find it necessary to increase the wavelet maximum quality factor $Q_{\textrm{max}}$, as the glitch is composed of short individual bursts of power, each of which is modeled by individual wavelets with a small quality factor. Figure 9 shows the whitened data and credible intervals for the whitened CBC and glitch reconstruction in each detector for each of the injected signals. Due to the large glitch duration, the signals are injected sufficiently far apart that the reconstruction plots show non-overlapping parts of the data and the glitch. The glitch reconstructions are therefore not expected to match. As expected from the glitch spectrogram, the glitch is characterized by a series of short high-frequency bursts, each of which is modeled by different wavelets within our glitch model. Figure 10 shows posterior distributions for selected source parameters for the left-most injection in blue, as well as the injected parameters. In all cases the recovered parameters are consistent with their injected values, suggesting that the presence of the glitch does not incur biases on the inferred source properties if the two are modeled simultaneously. As before, we also plot results from a CBC+noise run that neglects the glitch in the data in orange, and again find small biases induced by the presence of the glitch in the source intrinsic parameters. The glitch subtraction process is detailed in Fig. 11, which again shows spectrograms of the original data containing both the glitch and the signal (left), data after a fair draw from the glitch model has been subtracted (middle), and data after both the glitch and the fair draw CBC model have been removed (right). As before, data from the middle panel could be used for further data processing. The right panel shows data where a model for both the glitch and the CBC has been subtracted.
Even though the majority of the glitch power is absent (compare the left and right panels), some small non-Gaussian power might be left behind. The reason for this is that the blue mountain glitch is manifested as individual short bursts of glitch power, which our flexible analysis attempts to model completely independently. Indeed, the glitch model for this run uses ${\cal{O}}(70)$ wavelets. Each of these wavelets needs to model sufficient non-Gaussian power in the data in order to overcome the parsimony penalty incurred by adding more parameters to the model. As such, we expect that some of the weaker “bursts” of the glitch will not be recovered. Possible ways to alleviate this are discussed in Sec. V. Figure 11: Spectrogram of the LIGO Hanford data around the time of the blue mountain glitch for the leftmost injection from Fig. 9. Left panel: data containing the blue mountain glitch and the simulated CBC signal. Middle panel: data after a fair draw from the glitch model has been subtracted, leaving behind only the chirping CBC signal. Right panel: data after a fair draw from the glitch and CBC models has been subtracted, leaving behind only Gaussian detector noise. ## IV Gravitational Wave Events As a further demonstration of our CBC+glitch+noise model, we also analyze two astrophysical events, GW170817 Abbott _et al._ (2017) and GW150914 Abbott _et al._ (2016a), whose data are available from GWOSC Gravitational Wave Open Science Center (GWOSC); Abbott _et al._ (2019). Though not the main focus of this paper, the analysis presented below also provides an estimate of the effect that marginalizing over the noise PSD has on the inferred astrophysical parameters. More details about this effect will be presented in a separate study. ### IV.1 GW170817 Perhaps the best-known instance of a GW signal overlapping with an instrumental glitch is GW170817 Abbott _et al._ (2017).
Inference on the GW170817 source properties is performed on data where the glitch in LIGO Livingston has been modeled with BayesWave’s glitch-only model and subtracted. Analysis of simulated signals suggests that this procedure leads to unbiased inference, while any analysis on data that contain the glitch results in highly biased source parameters Pankow _et al._ (2018). Both versions of the data, with and without the glitch, are publicly available BayesWave Glitch Subtraction for GW170817 , and we analyze each with a different model. We use data from the LIGO Hanford and the LIGO Livingston detectors and analyze $64$s of data from $16$Hz to $2048$Hz using the IMRPhenomD_NRTides waveform model that includes finite-size effects Dietrich _et al._ (2019). We employ our CBC+glitch+noise model on the data with the glitch and the CBC+noise model on data where the glitch has already been subtracted. For the CBC+glitch+noise case we use GlitchBuster Cornish _et al._ (2020) to provide a quick fit to the glitch and use that as a starting point for our glitch model during sampling. Figure 12: Credible intervals for the glitch (orange) and the CBC (blue) signal reconstruction for GW170817. Shaded regions correspond to 90% credible intervals, while in grey dashed lines we plot the data whitened with a fair draw PSD from our noise model posterior. The top row corresponds to LIGO Hanford while the bottom row corresponds to LIGO Livingston. The LIGO Hanford plot zooms in to show the signal, which is invisible in the LIGO Livingston plot due to the size of the glitch; note the y-scale difference in the two plots. Figure 13: One- and two-dimensional posterior distributions for selected source parameters for GW170817. We include the mass ratio $q$, the effective spin $\chi_{\textrm{eff}}$, and the detector-frame chirp mass ${\cal{M}}$ posteriors.
Blue curves show posteriors under the CBC+glitch+noise model on the full data, while orange curves correspond to the CBC+noise model on data where the glitch has already been subtracted. The two sets of results are consistent with each other. Credible intervals for the signal and glitch reconstructions are shown in Fig. 12 for each detector for $\sim 150$ms of data around the glitch. Despite its high SNR, GW170817 had a relatively low amplitude, so the LIGO Hanford plot has been zoomed in to make the signal visible. The LIGO Livingston data are dominated by the glitch, peaking at $\sim 150\sigma$ relative to the background detector Gaussian noise. The signal is not visible in the LIGO Livingston data given the plotting scale. Figure 13 shows selected source parameters obtained from data both with and without the glitch. We find consistent results, showing that our combined CBC+glitch+noise analysis can faithfully fit the CBC signal and the glitch simultaneously, without the need for the two-step process of first removing the glitch and then reanalyzing the data. ### IV.2 GW150914 The first GW signal directly detected by the LIGO detectors, GW150914 Abbott _et al._ (2016a), did not overlap with an instrumental glitch Abbott _et al._ (2016b). However, since it is one of the best studied and loudest signals, we select it as a demonstration of our analysis on data without glitches. Our glitch model has the flexibility to use no glitch wavelets; we therefore expect many samples in the glitch model posterior to contain exactly zero glitch power. We analyze $4$s of data starting at $16$Hz and with a sampling rate of $2048$Hz. We perform two runs, one with the CBC+glitch+noise model and one with the CBC+noise model, using otherwise identical settings. Relevant results are shown in Figs. 14 and 15 where, as before, we plot the CBC and glitch reconstructions of the CBC+glitch+noise model in the two detectors and the recovered source parameters. The CBC reconstruction of Fig.
14 is consistent with previous results Abbott _et al._ (2016c). The glitch reconstruction is too small to identify on the scale of the plot, as we find that $86\%$ and $14\%$ of our posterior samples had exactly zero glitch wavelets in LIGO Hanford and LIGO Livingston respectively. Figure 15 shows the posterior distributions for selected source parameters of GW150914 obtained under the CBC+glitch+noise and the CBC+noise models. The two posteriors yield consistent results, showing that the glitch model does not affect the CBC parameters when no glitch is present in the data. Figure 14: Credible intervals for the glitch (orange) and the CBC (blue) signal reconstruction for GW150914. Shaded regions correspond to 90% credible intervals, while in grey dashed lines we plot the data whitened with a fair draw PSD from our noise model posterior. The top row corresponds to LIGO Hanford while the bottom row corresponds to LIGO Livingston. Our glitch model recovers essentially no incoherent power coincident with the astrophysical signal, and therefore the reconstruction is not visible on the scale of the plot. Figure 15: One- and two-dimensional posterior distributions for selected source parameters for GW150914. We include the mass ratio $q$, the effective spin $\chi_{\textrm{eff}}$, and the detector-frame chirp mass ${\cal{M}}$ posteriors. Blue curves show posteriors under the CBC+glitch+noise model, while orange curves correspond to the CBC+noise model. The two sets of results are consistent with each other. ## V Conclusions We construct and validate an analysis of GW data that simultaneously models astrophysical CBC signals and instrumental glitches. We test the analysis against real instances of glitches in the two LIGO detectors from O2 data and simulated CBC signals injected at different times with respect to the glitch.
We find that our analysis can separate the two, and provide both estimates for the CBC source parameters and glitch-subtracted data for subsequent analyses. The glitch model we employ is a sum of sine-Gaussian wavelets that is not tuned to any specific glitch type or morphology; it can thus handle even novel glitch types that might first appear during O4. Even though this flexibility is desirable given the unpredictable and evolving nature of glitches, the efficacy of glitch subtraction could be improved by employing targeted priors for different glitch types. One such example would be a prior that anticipates arches at frequency multiples in the case of scattered light glitches. We leave such targeted priors to future work. Our analysis considered only simulated BBH signals, though we also present an analysis of the BNS GW170817. We expect overlapping CBCs and glitches of similar duration to be a worst-case scenario due to their similar morphology Abbott _et al._ (2018); Davis _et al._ (2020). Given that, we plan to carry out a larger scale study of our CBC+glitch analysis that includes more glitch types and CBC classes, such as BNSs and lower mass BBHs. Additionally, the analysis presented here did not make use of GlitchBuster Cornish _et al._ (2020) to provide initial fits to the glitch, apart from the GW170817 case. In the future we plan to investigate interfacing GlitchBuster and BayesWave in more detail, in the hope that an efficient starting point for the glitch model during sampling will decrease the sampler’s convergence time and result in ready-to-use glitch-subtracted data more quickly. We hope that our analysis will contribute to robust and efficient glitch mitigation given the increased event rate anticipated in O4; our goal is to facilitate analysis of as much data as possible and maximize the science output of the upcoming observations. ###### Acknowledgements.
We thank Derek Davis, Laura Nuttall, and Jessica McIver for sharing preliminary results and datasets for LIGO glitches and CBC injections. This research has made use of data, software and/or web tools obtained from the Gravitational Wave Open Science Center (https://www.gw-openscience.org), a service of LIGO Laboratory, the LIGO Scientific Collaboration and the Virgo Collaboration. LIGO is funded by the U.S. National Science Foundation. Virgo is funded by the French Centre National de Recherche Scientifique (CNRS), the Italian Istituto Nazionale della Fisica Nucleare (INFN) and the Dutch Nikhef, with contributions by Polish and Hungarian institutes. The authors are grateful for computational resources provided by the LIGO Laboratory and supported by National Science Foundation Grants PHY-0757058 and PHY-0823459. N.J.C. appreciates the support provided by NSF award PHY-1912053. M.W. gratefully acknowledges support and hospitality from the Simons Foundation through the pre-doctoral program at the Center for Computational Astrophysics, Flatiron Institute. The Flatiron Institute is supported by the Simons Foundation. Software: gwpy Macleod _et al._ (2020), matplotlib Hunter (2007). ## References * Aasi _et al._ (2015) J. Aasi _et al._ (LIGO Scientific), Class. Quant. Grav. 32, 074001 (2015), arXiv:1411.4547 [gr-qc] . * Acernese _et al._ (2015) F. Acernese _et al._ (Virgo Collaboration), Class. Quant. Grav. 32, 024001 (2015), arXiv:1408.3978 [gr-qc] . * Abbott _et al._ (2020a) R. Abbott _et al._ (LIGO Scientific, Virgo), (2020a), arXiv:2010.14527 [gr-qc] . * Abbott _et al._ (2017) B. P. Abbott _et al._ (LIGO Scientific Collaboration, Virgo Collaboration), Phys. Rev. Lett. 119, 161101 (2017), arXiv:1710.05832 [gr-qc] . * Abbott _et al._ (2013) B. P. Abbott _et al._ (LIGO Scientific Collaboration, Virgo Collaboration), Living Rev. Rel. 19, 1 (2013), arXiv:1304.0670 [gr-qc] . * Chatziioannou _et al._ (2019) K. Chatziioannou, C.-J. Haster, T. B. Littenberg, W. M. Farr, S. 
Ghonge, M. Millhouse, J. A. Clark, and N. Cornish, Phys. Rev. D 100, 104004 (2019). * Veitch _et al._ (2015) J. Veitch _et al._ , Phys. Rev. D 91, 042003 (2015), arXiv:1409.7215 [gr-qc] . * Abbott _et al._ (2020b) B. P. Abbott _et al._ (LIGO Scientific, Virgo), Class. Quant. Grav. 37, 055002 (2020b), arXiv:1908.11170 [gr-qc] . * Rover _et al._ (2011) C. Rover, R. Meyer, and N. Christensen, Class. Quant. Grav. 28, 015010 (2011), arXiv:0804.3853 [stat.ME] . * Talbot and Thrane (2020) C. Talbot and E. Thrane, (2020), arXiv:2006.05292 [astro-ph.IM] . * Usman _et al._ (2016) S. A. Usman _et al._ , Class. Quant. Grav. 33, 215004 (2016), arXiv:1508.02357 [gr-qc] . * Sachdev _et al._ (2019) S. Sachdev _et al._ , (2019), arXiv:1901.08580 [gr-qc] . * Zackay _et al._ (2019) B. Zackay, T. Venumadhav, J. Roulet, L. Dai, and M. Zaldarriaga, (2019), arXiv:1908.05644 [astro-ph.IM] . * DeRosa _et al._ (2012) R. DeRosa, J. C. Driggers, D. Atkinson, H. Miao, V. Frolov, M. Landry, J. A. Giaime, and R. X. Adhikari, Classical and Quantum Gravity 29, 215008 (2012), arXiv:1204.5504 [physics.ins-det] . * Tiwari _et al._ (2015) V. Tiwari _et al._ , Class. Quant. Grav. 32, 165014 (2015), arXiv:1503.07476 [gr-qc] . * Meadors _et al._ (2014) G. D. Meadors, K. Kawabe, and K. Riles, Class. Quant. Grav. 31, 105014 (2014), arXiv:1311.6835 [astro-ph.IM] . * Driggers _et al._ (2019) J. Driggers _et al._ (LIGO Scientific), Phys. Rev. D 99, 042001 (2019), arXiv:1806.00532 [astro-ph.IM] . * Davis _et al._ (2019) D. Davis, T. Massinger, A. Lundgren, J. Driggers, A. Urban, and L. Nuttall, Class. Quant. Grav. 36, 055011 (2019), arXiv:1809.05348 [astro-ph.IM] . * Vajente _et al._ (2020) G. Vajente, Y. Huang, M. Isi, J. C. Driggers, J. S. Kissel, M. J. Szczepanczyk, and S. Vitale, Phys. Rev. D 101, 042003 (2020), arXiv:1911.09083 [gr-qc] . * Ormiston _et al._ (2020) R. Ormiston, T. Nguyen, M. Coughlin, R. X. Adhikari, and E. Katsavounidis, Phys. Rev. Res. 2, 033066 (2020), arXiv:2005.06534 [astro-ph.IM] . 
* Coughlin _et al._ (2019) S. Coughlin _et al._ , Phys. Rev. D 99, 082002 (2019), arXiv:1903.04058 [astro-ph.IM] . * Cornish and Littenberg (2015) N. J. Cornish and T. B. Littenberg, Class. Quant. Grav. 32, 135012 (2015), arXiv:1410.3835 [gr-qc] . * Cornish _et al._ (2020) N. J. Cornish, T. B. Littenberg, B. Bécsy, K. Chatziioannou, J. A. Clark, S. Ghonge, and M. Millhouse, (2020), arXiv:2011.09494 [gr-qc] . * Green (1995) P. J. Green, Biometrika 82, 711 (1995), http://oup.prod.sis.lan/biomet/article-pdf/82/4/711/699533/82-4-711.pdf . * Littenberg and Cornish (2015) T. B. Littenberg and N. J. Cornish, Phys. Rev. D91, 084034 (2015), arXiv:1410.3852 [gr-qc] . * (26) BayesWave Glitch Subtraction for GW170817, https://dcc.ligo.org/LIGO-T1700406/public. * Pankow _et al._ (2018) C. Pankow _et al._ , Phys. Rev. D98, 084016 (2018), arXiv:1808.03619 [gr-qc] . * LIGO Scientific Collaboration, Virgo Collaboration (2018) LIGO Scientific Collaboration, Virgo Collaboration, “LALSuite,” https://git.ligo.org/lscsoft/lalsuite (2018). * Cornish (2021) N. J. Cornish, (2021), arXiv:2101.01188 [gr-qc] . * Cornish and Shuman (2020) N. J. Cornish and K. Shuman, Phys. Rev. D 101, 124008 (2020), arXiv:2005.03610 [gr-qc] . * Cornish (2016) N. J. Cornish, (2016), arXiv:1606.00953 [gr-qc] . * Gravitational Wave Open Science Center () (GWOSC) Gravitational Wave Open Science Center (GWOSC), https://www.gw-openscience.org/. * Abbott _et al._ (2019) R. Abbott _et al._ (LIGO Scientific, Virgo), (2019), arXiv:1912.11716 [gr-qc] . * Husa _et al._ (2016) S. Husa, S. Khan, M. Hannam, M. Pürrer, F. Ohme, X. Jiménez Forteza, and A. Bohé, Phys. Rev. D 93, 044006 (2016), arXiv:1508.07250 [gr-qc] . * Khan _et al._ (2016) S. Khan, S. Husa, M. Hannam, F. Ohme, M. Pürrer, X. Jiménez Forteza, and A. Bohé, Phys. Rev. D 93, 044007 (2016), arXiv:1508.07253 [gr-qc] . * Cabero _et al._ (2019) M. Cabero _et al._ , Class. Quant. Grav. 36, 15 (2019), arXiv:1901.05093 [physics.ins-det] . * Powell (2018) J. 
Powell, Class. Quant. Grav. 35, 155017 (2018), arXiv:1803.11346 [astro-ph.IM] . * Accadia _et al._ (2010) T. Accadia _et al._ , Classical and Quantum Gravity 27, 194011 (2010). * Soni _et al._ (2020) S. Soni _et al._ , arXiv e-prints , arXiv:2007.14876 (2020), arXiv:2007.14876 [astro-ph.IM] . * Abbott _et al._ (2016a) B. Abbott _et al._ (LIGO Scientific, Virgo), Phys. Rev. Lett. 116, 061102 (2016a), arXiv:1602.03837 [gr-qc] . * Dietrich _et al._ (2019) T. Dietrich _et al._ , Phys. Rev. D99, 024029 (2019), arXiv:1804.02235 [gr-qc] . * Abbott _et al._ (2016b) B. P. Abbott _et al._ (LIGO Scientific, Virgo), Class. Quant. Grav. 33, 134001 (2016b), arXiv:1602.03844 [gr-qc] . * Abbott _et al._ (2016c) B. P. Abbott _et al._ (LIGO Scientific, Virgo), Phys. Rev. Lett. 116, 241102 (2016c), arXiv:1602.03840 [gr-qc] . * Abbott _et al._ (2018) B. P. Abbott _et al._ (LIGO Scientific, Virgo), Class. Quant. Grav. 35, 065010 (2018), arXiv:1710.02185 [gr-qc] . * Davis _et al._ (2020) D. Davis, L. V. White, and P. R. Saulson, Class. Quant. Grav. 37, 145001 (2020), arXiv:2002.09429 [gr-qc] . * Macleod _et al._ (2020) D. Macleod, A. L. Urban, S. Coughlin, T. Massinger, M. Pitkin, paulaltin, J. Areeda, E. Quintero, T. G. Badger, L. Singer, and K. Leinweber, “gwpy/gwpy: 1.0.1,” (2020). * Hunter (2007) J. D. Hunter, Computing In Science & Engineering 9, 90 (2007).
2101.01201
# AT2017gfo: Bayesian inference and model selection of multi-component kilonovae and constraints on the neutron star equation of state Matteo Breschi1, Albino Perego2,3, Sebastiano Bernuzzi1, Walter Del Pozzo4,5, Vsevolod Nedora1, David Radice6,7,8, Diego Vescovi9,10,11 1Theoretisch-Physikalisches Institut, Friedrich-Schiller-Universität Jena, Fröbelstieg 1, 07743, Jena, Germany 2Dipartimento di Fisica, Universitá di Trento, Via Sommarive 14, 38123, Trento, Italy 3INFN-TIFPA, Trento Institute for Fundamental Physics and Applications, via Sommarive 14, 38123, Trento, Italy 4Dipartimento di Fisica “Enrico Fermi”, Universitá di Pisa, Largo B. Pontecorvo 14, 56127, Pisa, Italy 5INFN, Sezione di Pisa, Largo B. Pontecorvo 14, 56127, Pisa, Italy 6Institute for Gravitation & the Cosmos, The Pennsylvania State University, University Park, PA 16802, USA 7Department of Physics, The Pennsylvania State University, University Park, PA 16802, USA 8Department of Astronomy & Astrophysics, The Pennsylvania State University, University Park, PA 16802, USA 9Gran Sasso Science Institute, Viale F. Crispi 7, 67100, L’Aquila, Italy 10INFN, Sezione di Perugia, Via A. Pascoli 23, 06123, Perugia, Italy 11INAF, Observatory of Abruzzo, Via M. Maggini, 64100, Teramo, Italy (Accepted 2021 May 3. Received 2021 May 3; in original form 2021 January 7) ###### Abstract The joint detection of the gravitational wave GW170817, of the short $\gamma$-ray burst GRB170817A and of the kilonova AT2017gfo, generated by the binary neutron star merger observed on August 17, 2017, is a milestone in multimessenger astronomy and provides new constraints on the neutron star equation of state. We perform Bayesian inference and model selection on AT2017gfo using semi-analytical, multi-component models that also account for non-spherical ejecta.
Observational data favor anisotropic geometries over spherically symmetric profiles, with a log-Bayes’ factor of ${\sim}10^{4}$, and favor multi-component models over single-component ones. The best-fitting model is an anisotropic three-component model composed of dynamical ejecta plus neutrino and viscous winds. Using the dynamical ejecta parameters inferred from the best-fitting model and numerical-relativity relations connecting the ejecta properties to the binary properties, we constrain the binary mass ratio to $q<1.54$ and the reduced tidal parameter to $120<\tilde{\Lambda}<1110$. Finally, we combine the predictions from AT2017gfo with those from GW170817, constraining the radius of a neutron star of $1.4~{}{\rm M_{\odot}}$ to $12.2\pm 0.5~{}{\rm km}$ ($1\sigma$ level). This prediction could be further strengthened by improving kilonova models with numerical-relativity information. ###### keywords: transients: neutron star mergers – methods: data analysis – equation of state ## 1 Introduction On August 17, 2017, the ground-based interferometers of LIGO and Virgo Abbott et al. (2018a); Aasi et al. (2015); Acernese et al. (2015) detected the first gravitational-wave (GW) signal coming from a binary neutron star (BNS) merger, known as GW170817 Abbott et al. (2017a). GW170817 was followed by a short gamma-ray burst (GRB), GRB170817A Abbott et al. (2017c); Savchenko et al. (2017), which reached the space observatories Fermi Ajello et al. (2016) and INTEGRAL Winkler et al. (2011) ${\sim}1.7\,{\rm s}$ after coalescence time. Eleven hours later, several telescopes started to collect photometric and spectroscopic data from AT2017gfo, an unprecedented electromagnetic (EM) kilonova transient Coulter et al. (2017); Chornock et al. (2017); Nicholl et al. (2017); Cowperthwaite et al. (2017); Pian et al. (2017); Smartt et al. (2017); Tanvir et al. (2017); Tanaka et al. (2017); Valenti et al. (2017) coming from a coincident region of the sky.
Kilonovae (kNe) are quasi-thermal EM emissions interpreted as a distinctive signature of $r$-process nucleosynthesis in the neutron-rich matter ejected from the merger and from the subsequent BNS remnant evolution Smartt et al. (2017); Kasen et al. (2017); Rosswog et al. (2018); Metzger (2020); Kawaguchi et al. (2020). The follow-up of the source lasted for more than a month and also included non-thermal emission from the GRB170817A afterglow (e.g., Nynka et al., 2018; Hajela et al., 2019). The combined observation of GW170817, GRB170817A and AT2017gfo marked the dawn of multimessenger astronomy with compact binaries Abbott et al. (2017b). From these multimessenger observations it is possible to infer unique information on the unknown equation of state (EOS) of neutron star (NS) matter (e.g. Radice et al., 2017; Margalit & Metzger, 2017; Bauswein et al., 2017; Radice et al., 2018b; Dietrich et al., 2018). Indeed, the EOS determines the tidal polarizability parameters that describe tidal interactions during the inspiral and merger and characterize the GW signal Damour et al. (2012); Bernuzzi et al. (2014). It also determines the outcome of BNS mergers (e.g. Shibata et al., 2005; Bernuzzi et al., 2015a, 2020) and the subsequent postmerger GW signal from the remnant (e.g. Bauswein et al., 2014; Bernuzzi et al., 2015b; Zappa et al., 2019; Agathos et al., 2020; Breschi et al., 2019). At the same time, the amount of mass, the velocity, and the composition of the ejecta are also strongly dependent on the EOS, which leaves an imprint on the kN signature, e.g. Hotokezaka et al. (2013); Bauswein et al. (2013); Radice et al. (2018d); Radice et al. (2018a). The spectrum of AT2017gfo was recorded from ultraviolet (UV) to near-infrared (NIR) frequencies (e.g., Pian et al., 2017; Nakar et al., 2018), and the observations showed several characteristic features. At early stages, the kN was very bright and its spectrum peaked in the blue band 1 day after the merger (blue kN).
After that, the peak of the spectrum moved towards larger wavelengths, peaking at NIR frequencies between five and seven days after merger (red kN). Minimal models that can explain these features require more than one component. In particular, minimal fitting models assume spherical symmetry and include a lanthanide-rich ejecta responsible for the red kN, typically interpreted as dynamical ejecta, and another ejecta with material partially reprocessed by weak interactions, responsible for the blue component (e.g., Villar et al., 2017b). Numerical relativity (NR) simulations show that the geometry profiles of the ejecta are not always spherically symmetric and their distributions are not homogeneous Perego et al. (2017a). Moreover, NR simulations also indicate the presence of multiple ejecta components, from the dynamical ejecta to the disk-wind ejecta Rosswog et al. (2014); Fernández et al. (2015); Metzger & Fernández (2014); Perego et al. (2014); Nedora et al. (2019). Therefore, this information has to be taken into account during the inference of the kN properties. The modeling of kNe is a challenging problem, due to the complexity of the underlying physics, which involves diverse interactions and scales (see Metzger, 2020, and references therein). Together with the choice of ejecta profiles, the lack of a reliable description of the radiation transport is a relevant source of uncertainty in the modeling of kNe, due to the incomplete knowledge of the thermalization processes Korobkin et al. (2012); Barnes et al. (2016) and of the energy-dependent photon opacities in $r$-process matter Tanaka et al. (2020); Even et al. (2020). Current kN models often use either simplistic ejecta profiles or simplistic radiation schemes (e.g., Grossman et al., 2014; Villar et al., 2017b; Coughlin et al., 2017; Perego et al., 2017a).
Given the challenges and uncertainties associated with the theoretical prediction of kN features, Bayesian inference and model selection on the observational data can provide important insights into physical processes hidden in the kN signature. In this work, we explore model selection in geometrical and ejecta properties using simplified light curve (LC) models that nonetheless capture the key features of the problem. The inference results are then employed to derive constraints on the neutron star EOS. In Sec. 2, we describe the semi-analytical model and the ejecta components used in our analysis. In Sec. 3, we recall the Bayesian framework for model selection, highlighting the choices of the relevant statistical quantities, such as the likelihood function and the prior distributions. In Sec. 4, we discuss the inference on AT2017gfo, critically examining the posterior samples in light of targeted NR simulations Perego et al. (2019); Nedora et al. (2019); Endrizzi et al. (2020); Nedora et al. (2021); Bernuzzi et al. (2020) and previous analyses. In Sec. 5, we discuss new constraints on the NS EOS, focusing first on the mass ratio and reduced tidal parameter for the source of GW170817, and then on the neutron star radius ${R_{1.4}}$. We conclude in Sec. 6. ## 2 Kilonova model In this section, we first summarize basic analytical results and scaling relations that characterize the kN emission, and then describe in detail the models we employ for the ejecta components and LC calculations. ### 2.1 Basic features Let us consider a shell of ejected matter characterized by a mass density $\varrho$, with total mass ${m}$ and gray opacity $\kappa$ (mean cross section per unit mass). The shell expands homologously, symmetrically with respect to the equatorial plane, at velocity $v$, such that its mean radius is $R\sim vt$ after a time $t$ following the merger.
Matter opacity to EM radiation can be expressed in terms of the optical depth, $\tau$, which is estimated as $\tau\simeq\varrho\kappa R$. After the BNS collision, when matter becomes unbound and $r$-process nucleosynthesis occurs, the ejecta are extremely hot, $T\sim 10^{9}~{}{\rm K}$ (e.g. de Jesús Mendoza-Temis et al., 2015; Wu et al., 2016; Perego et al., 2019). However, at early times the thermal energy is not dissipated efficiently, since the environment is optically thick ($\tau\gg 1$) and photons diffuse out only on the diffusion timescale until they reach the photosphere ($\tau=2/3$). As the outflow expands, its density drops ($\varrho\propto t^{-3}$) and the optical depth decreases. The key concept behind kNe is that photons can contribute to the EM emission at a given time $t$ if they diffuse on a timescale comparable to the expansion timescale, i.e., if they escape from the shells outside $R_{\rm diff}$, where $R_{\rm diff}$ is the radius at which the diffusion time $t_{\rm diff}\simeq R\tau/c$ equals the dynamical time $t$ Piran et al. (2013); Grossman et al. (2014); Metzger (2020). In the previous expression, $c$ is the speed of light. Since $t_{\rm diff}\propto t^{-1}$, a larger and larger portion of the ejecta becomes transparent with time. The luminosity peak of the kN occurs when the bulk of the matter that composes the shell becomes transparent. As a first approximation, the characteristic timescale at which the light curve peaks is commonly estimated Arnett (1982) as: $t_{\rm peak}=\sqrt{\frac{3{m}\kappa}{4\pi\beta vc}}\,,$ (1) where the dimensionless factor $\beta$ depends on the density profile of the ejecta. For a spherically symmetric, homologously expanding ejecta ($\beta\simeq 3$) with mass ${m}=10^{-2}~{}{\rm M_{\odot}}$, velocity $v=0.1~{}c$ and opacity in the range $\kappa\simeq 1{-}50~{}{\rm cm^{2}\,g^{-1}}$, which are typical values for lanthanide-free and lanthanide-rich matter, respectively Roberts et al. (2011); Kasen et al. (2013), Eq.
(1) predicts a characteristic $t_{\rm peak}$ in the range $1$–$10~{}{\rm days}$ Abbott et al. (2017d). In the absence of a heat source, matter would simply cool down through adiabatic expansion. However, the ejected material is continuously heated by the radioactive decays of the $r$-process yields, which provide a time-dependent heating rate of nuclear origin. An additional time dependence is introduced by the thermalization efficiency, i.e. the efficiency at which this nuclear energy, released in the form of supra-thermal particles (electrons, daughter nuclei, photons and neutrinos), thermalizes within the expanding ejecta (see, e.g., Metzger & Berger, 2012; Korobkin et al., 2012; Barnes et al., 2016; Hotokezaka et al., 2018). ### 2.2 Light Curves The kN LCs in our work are computed using the multi-component, anisotropic semi-analytical MKN model first introduced in Ref. Perego et al. (2017a) and largely based on the kN models presented in Refs. Grossman et al. (2014) and Martin et al. (2015) (see also Barbieri et al. (2019)). The ejecta are either spherical or axisymmetric with respect to the rotational axis of the remnant, and symmetric with respect to the equatorial plane. The viewing angle $\iota$ is measured as the angle between the rotational axis and the line of sight of the observer. For each component, the ejected material is described through the angular distribution of its ejected mass, ${m}$, root-mean-square (rms) radial velocity, ${v_{\rm rms}}$, and opacity, $\kappa$. In axisymmetric models, the latter quantities are functions of the polar angle $\theta$, measured from the rotational axis and discretized in $N_{\theta}=30$ angular bins evenly spaced in $\cos{\theta}$.
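The peak timescale of Eq. (1), introduced in Sec. 2.1, is straightforward to evaluate numerically. The following is a minimal sketch (not part of the MKN code) in CGS units with rounded physical constants; it reproduces the quoted $1$–$10~{\rm day}$ range for lanthanide-free versus lanthanide-rich opacities:

```python
import math

# Physical constants in CGS units (rounded)
C_LIGHT = 2.998e10   # speed of light [cm/s]
M_SUN = 1.989e33     # solar mass [g]

def t_peak_days(m_msun, kappa, v_over_c, beta=3.0):
    """Eq. (1): t_peak = sqrt(3 m kappa / (4 pi beta v c)), returned in days.

    m_msun   : ejecta mass in solar masses
    kappa    : gray opacity [cm^2/g]
    v_over_c : expansion velocity in units of c
    beta     : dimensionless density-profile factor (~3 for a homologous sphere)
    """
    m = m_msun * M_SUN
    v = v_over_c * C_LIGHT
    t_sec = math.sqrt(3.0 * m * kappa / (4.0 * math.pi * beta * v * C_LIGHT))
    return t_sec / 86400.0

# Typical kilonova ejecta: m = 1e-2 Msun, v = 0.1 c
print(t_peak_days(1e-2, 1.0, 0.1))    # lanthanide-free, kappa ~ 1 cm^2/g  -> ~1.5 days
print(t_peak_days(1e-2, 50.0, 0.1))   # lanthanide-rich, kappa ~ 50 cm^2/g -> ~11 days
```

Note that $t_{\rm peak}\propto\sqrt{\kappa}$, so the factor of 50 in opacity stretches the peak time by about a factor of 7.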
Additionally, within each ray, matter is radially distributed with a stationary profile in velocity space, $\xi(v)$, such that $\xi(v)\propto(1-\left(v/v_{\rm max}\right)^{2})^{3}$, where $\xi(v){\rm d}v$ is the matter contained in an infinitesimal layer of speed $\left[v,v+{\rm d}v\right]$, and $v_{\rm max}=v_{\rm max}({v_{\rm rms}})$ is the maximum velocity at the outermost edge of the component. The characteristic quantities $\varrho$, $v$ and $\kappa$ are then evaluated for every bin according to the assumed input profiles. For every bin, we estimate the emitted luminosity using the radial model described in Ref. Perego et al. (2017a) and in §4 of Ref. Barbieri et al. (2020) (see also Barbieri et al. (2019)). In particular, the model assumes that the luminosity is emitted as thermal radiation from the photosphere (of radial coordinate $R_{\rm ph}$), and the luminosity and the photospheric surface determine the effective emission temperature, $T_{\rm eff}$, through the Stefan-Boltzmann law. We expect this assumption to be well verified at early times (within a few days after merger), while deviations from it are expected to become increasingly relevant at later times. The time-dependent nuclear heating rate $\epsilon_{\rm nuc}$ entering these calculations is approximated by an analytic fitting formula, derived from detailed nucleosynthesis calculations Korobkin et al. (2012), $\epsilon_{\rm nuc}(t)=\epsilon_{0}\,\frac{\epsilon_{\rm th}(t)}{0.5}\,\epsilon_{\rm nr}(t)\,\left[\frac{1}{2}-\frac{1}{\pi}\arctan\left(\frac{t-t_{0}}{\sigma}\right)\right]^{\alpha}\,,$ (2) where $\sigma=0.11~{}{\rm s}$, $t_{0}=1.3~{}{\rm s}$, $\alpha=1.3$ and $\epsilon_{\rm th}(t)$ is the thermalization efficiency tabulated according to Ref. Barnes et al. (2016). The heating factor $\epsilon_{\rm nr}(t)$ is introduced as in Ref. Perego et al. (2017a) to improve the behavior of Eq.
(2) in the regime of mildly neutron-rich matter (characterized by an initial electron fraction $Y_{e}\gtrsim 0.25$) (see, e.g. Martin et al., 2015): $\epsilon_{\rm nr}(t,\kappa)=\left[1-w(\kappa)\right]+w(\kappa)\,\epsilon_{Y_{e}}(t)\,,$ (3) where $w(\kappa)$ is a smooth step function in $\log\kappa$ such that $w(\kappa<1~{}{\rm cm^{2}\,g^{-1}})=1$ and $w(\kappa>10~{}{\rm cm^{2}\,g^{-1}})=0$, and the factor $\epsilon_{Y_{e}}(t)$ encodes the dependence on $Y_{e}$: if $Y_{e}<0.25$, then $\epsilon_{Y_{e}}(t)=1$; otherwise, when $Y_{e}\geq 0.25$, $\epsilon_{Y_{e}}(t)=\epsilon_{\rm min}+{\epsilon_{\rm max}}{\left[1+e^{4(t/t_{\epsilon}-1)}\right]}^{-1}\,,$ (4) where $t_{\epsilon}=1~{}{\rm day}$, $\epsilon_{\rm min}=0.5$ and $\epsilon_{\rm max}=2.5$. Furthermore, in order to improve the description in the high-frequency bands (i.e., $V$, $U$, $B$ and $g$) within the timescale of the kilonova emission, and following Ref. Villar et al. (2017a), we introduce a floor temperature, i.e. a minimum value for $T_{\rm eff}$. This is physically related to the drop in opacity due to the full recombination of the free electrons, occurring when the matter temperature drops below $T_{\rm floor}$ Kasen et al. (2017); Kasen & Barnes (2019). Under these assumptions, the condition $T_{\rm eff}=T_{\rm floor}$ becomes a good tracer for the photosphere location. Since kNe are powered by the radioactive decay of different blends of atomic species, we introduce in our model two floor temperatures, ${T^{\rm Ni}_{\rm floor}}$ and ${T^{\rm LA}_{\rm floor}}$, that characterize the recombination temperature of lanthanide-free and of lanthanide-rich ejecta, respectively.
Eventually, the emissions coming from the different rays are combined to obtain the spectral flux at the observer location: $F_{\nu}(\mathbf{n},t)=\int_{\mathbf{n}_{\Omega}\cdot\mathbf{n}>0}\left(\frac{R_{\rm ph}(\Omega,t)}{D_{L}}\right)^{2}B_{\nu}(T_{\rm eff}(\Omega,t))~{}\mathbf{n}\cdot{\rm d}\bm{\Omega}$ (5) where $\mathbf{n}$ is the unit vector along the line of sight, $\mathbf{n}_{\Omega}$ is the unit vector spanning the solid angle $\Omega$, $D_{L}$ is the luminosity distance, $R_{\rm ph}$ is the local radial coordinate of the photospheric surface, and $B_{\nu}(T_{\rm eff})$ is the spectral radiance at frequency $\nu$ for a surface of temperature $T_{\rm eff}$. Lastly, from Eq. (5), it is possible to compute the apparent AB magnitude ${\rm mag}_{b}$ in a given photometric band $b$ as: ${\rm mag}_{b}(\mathbf{n},t)=-2.5\log_{10}\left(F_{\nu_{b}}(\mathbf{n},t)\right)-48.6\,,$ (6) where $\nu_{b}$ is the effective central frequency of band $b$. ### 2.3 Multi-Component Model Figure 1: Graphic representation of the analyzed ejecta profiles for isotropic and anisotropic cases, from an azimuthal perspective and at a fixed time. The black dot represents the remnant and the dashed line is the projected orbital plane of the binary. The shadowed areas describe the ejecta profiles: the shape characterizes the mass distribution, while the colors refer to the prior assumptions on the opacity parameter. In particular, blue regions denote opacities lower than $5~{}{\rm cm^{2}\,g^{-1}}$, red regions refer to opacities greater than $5~{}{\rm cm^{2}\,g^{-1}}$, and orange areas indicate a broadly distributed opacity. All shells are isotropically expanding with a constant velocity. In order to describe the different properties of AT2017gfo, it is necessary to invoke a multi-component structure for the ejecta producing the kN.
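For the special case of a spherical photosphere, the surface integral of Eq. (5) reduces to $F_{\nu}=\pi B_{\nu}(T_{\rm eff})\,(R_{\rm ph}/D_{L})^{2}$, and Eq. (6) then gives the AB magnitude directly. A minimal sketch (CGS units; the numbers below are purely illustrative, not fitted values for AT2017gfo):

```python
import math

H_PLANCK = 6.626e-27   # Planck constant [erg s]
K_BOLTZ = 1.381e-16    # Boltzmann constant [erg/K]
C_LIGHT = 2.998e10     # speed of light [cm/s]

def planck_nu(nu, T):
    """Spectral radiance B_nu(T) [erg s^-1 cm^-2 Hz^-1 sr^-1]."""
    x = H_PLANCK * nu / (K_BOLTZ * T)
    return 2.0 * H_PLANCK * nu**3 / C_LIGHT**2 / (math.exp(x) - 1.0)

def ab_magnitude_sphere(nu, T_eff, R_ph, D_L):
    """Eqs. (5)-(6) for a spherical photosphere:
    F_nu = pi * B_nu(T_eff) * (R_ph / D_L)^2, mag = -2.5 log10(F_nu) - 48.6."""
    F_nu = math.pi * planck_nu(nu, T_eff) * (R_ph / D_L) ** 2
    return -2.5 * math.log10(F_nu) - 48.6

# Illustrative numbers: photosphere at R_ph = 0.1c * (1 day), T_eff = 5000 K,
# source at 40 Mpc, observed near the g band (~6.3e14 Hz)
R_ph = 0.1 * C_LIGHT * 86400.0
D_L = 40.0 * 3.086e24   # 40 Mpc in cm
print(ab_magnitude_sphere(6.3e14, 5000.0, R_ph, D_L))   # ~21 mag
```

In the anisotropic model, the same Planck-law integrand is instead summed over the angular bins whose photospheric patches face the observer.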
Different components are characterized by different sets of intrinsic parameters, ${m}$, ${v_{\rm rms}}$ and $\kappa$, and by their angular distributions with respect to $\theta$. Given the angular profiles of the characteristic parameters, the physical luminosity produced by each component inside a ray is computed using the model outlined in the previous section. Then, the total bolometric luminosity of the ray is given by the sum of the individual contributions, i.e. $L(t)=\sum_{k}L^{(k)}(t)$, where $k$ runs over the components. The outermost photosphere is the one that determines the thermal spectrum of the emission. Once $R_{\rm ph}$ and $T_{\rm eff}$ have been determined, the spectral flux and the AB magnitudes are computed according to Eqs. (5) and (6). We perform the analysis using two different assumptions on the profiles of the source. Initially, we impose completely isotropic profiles for every parameter of every ejecta component. These cases are labeled as isotropic, ‘ISO’. Subsequently, we introduce angular profiles as functions of the polar angle for the mass and opacity parameters, while we keep ${v_{\rm rms}}$ always isotropic. This second case is labeled as anisotropic, ‘ANI’. In parallel, we explore models with a different number of components. We always assume the presence of the dynamical ejecta, while we add one or two qualitatively different disk-wind ejecta components. In the following paragraphs, we describe the physical assumptions on each component and the choice of the prior distributions (see Tab. 1). Fig. 1 shows a graphical representation of the employed ejecta components. ##### Dynamical ejecta $({\rm D})$. The BNS collision ejects unbound matter on the dynamical timescale, whose properties strongly depend on the total mass of the BNS, on the mass ratio and on the EOS (e.g. Hotokezaka et al., 2013; Rosswog et al., 2013; Bauswein et al., 2013; Radice et al., 2016; Bovard et al., 2017; Radice et al., 2018c, d).
This ejection is due to tidal torques and to shocks developing at the contact interface between the merging stars, when matter is squeezed out by hydrodynamical processes Oechslin et al. (2006); Hotokezaka et al. (2013). The expansion of this ejecta component has a velocity of roughly ${v_{\rm rms}}\sim 0.2\,c$. Moreover, this phenomenon generates a distribution of ejected mass that is denser near the orbital plane than along its orthogonal axis, and is characterized by larger opacities at lower latitudes. In particular, neutrino irradiation (if significant) increases the ejecta $Y_{e}$ and prevents the formation of lanthanides. For the anisotropic analyses, the mass profile is taken to be $\varrho(\theta)\propto\sin\theta$, and the opacity profile is taken as a step function in the polar angle characterized by the parameters $(\kappa_{\rm low},\kappa_{\rm high})$, for low and high latitudes respectively, with a step angle $\theta_{\rm step}=\pi/4$ (see Sec. 3.3). In terms of the emitted LC, the described ejecta are characterized by a red equatorial component and a blue contribution at higher latitudes. ##### Neutrino-driven wind $({\rm N})$. Simulations of the remnant evolution in the aftermath of a BNS merger reveal the presence of other ejection mechanisms happening over the thermal and viscous evolution timescales (e.g. Metzger et al., 2008; Fernández & Metzger, 2013; Perego et al., 2014; Perego et al., 2017b; Decoene et al., 2020). If the ejection happens while the remnant is still a relevant source of neutrinos, neutrino irradiation has enough time to increase $Y_{e}$ above 0.25, preventing full $r$-process nucleosynthesis, especially close to the polar axis. Detailed simulations Perego et al. (2014); Martin et al. (2015); Fujibayashi et al. (2018, 2020) show that a relatively small fraction of the expelled disk contributes to this component, and its velocity is expected to be ${v_{\rm rms}}\lesssim 0.1c$.
For the anisotropic analyses, the mass profile is taken to be uniform in the range $\theta\in[0,\pi/3]$ and negligible otherwise, while the opacity profile is taken as a step function in the polar angle, with a step angle $\theta_{\rm step}=\pi/3$. ##### Disk’s viscous ejecta $({\rm V})$. In addition to neutrinos, viscous torques of dynamical and magnetic origin can unbind matter from the disk around massive NSs or black holes Metzger et al. (2010); Metzger & Fernández (2014); Just et al. (2015). This viscous component is expected to unbind a large fraction of the disk matter on longer timescales, reaching ${m}\lesssim 10^{-1}{\rm M_{\odot}}$, with a relatively low velocity, ${v_{\rm rms}}\lesssim 0.05c$. The corresponding ejecta are more uniformly distributed over the polar angle than the dynamical ejecta and the $\nu$-driven wind ejecta. The presence or absence of a massive NS at the center can influence the $Y_{e}$ of these ejecta. All angular profiles are thus assumed to be isotropic for this component Wu et al. (2016); Siegel & Metzger (2018). We conclude this section by recalling that the main motivation behind the use of the semi-analytic model presented above is the optimal compromise between its robustness and adaptability, essential to model the non-trivial structure of the ejecta, and its reduced computational cost, necessary to perform parameter estimation studies. However, it has been shown that simplified models that avoid the solution of the radiation transport problem can suffer from systematic uncertainties Wollaeger et al. (2018). In particular, the analytical model presented in Grossman et al. (2014), on which ours is based, produces significantly lower light curves. The comparison with observed kN light curves and more detailed kN models showed that larger nuclear heating rates $\epsilon_{0}$ systematically reduce this discrepancy.
## 3 Method In this section, we recall the basic concepts of model selection as stated in the Bayesian theory of probability. Then, we describe the statistical technique used for the computation of the Bayes’ factors. By convention, the symbol ‘$\log$’ denotes the natural logarithm, while logarithms to other bases are written explicitly when used. ### 3.1 Model Selection Given some data $d$ and a model $H$ (hypothesis) described by a set of parameters ${\bm{\theta}}$, the posterior probability is given by Bayes’ theorem: $p({\bm{\theta}}|d,H)=\frac{p(d|{\bm{\theta}},H)\,p({\bm{\theta}}|H)}{p(d|H)}\,,$ (7) where $p(d|{\bm{\theta}},H)$ is the likelihood function, $p({\bm{\theta}}|H)$ is the prior probability assigned to the parameters and $p(d|H)$ is the evidence. The latter plays the role of a normalization constant and can be computed by marginalizing the likelihood function, $p(d|H)=\int_{\Theta}p(d|{\bm{\theta}},H)\,p({\bm{\theta}}|H){\rm d}{\bm{\theta}}\,,$ (8) where the integral is computed over the entire parameter space $\Theta$. In the framework of the Bayesian theory of probability, we can compare two models, say $A$ and $B$, by computing the ratio of the respective posterior probabilities, also known as the Bayes’ factor, $\mathcal{B}_{B}^{A}=\frac{p(A|d,H_{A})}{p(B|d,H_{B})}\,.$ (9) Using Eq. (7) we get: $\mathcal{B}_{B}^{A}=\frac{p(d|A,H_{A})}{p(d|B,H_{B})}\frac{p(A|H_{A})}{p(B|H_{B})}=\frac{p(d|A,H_{A})}{p(d|B,H_{B})}\,,$ (10) where we assumed that the data do not depend on the different hypotheses and that different models are equally likely a priori, i.e. $p(A|H_{A})=p(B|H_{B})$. Now suppose that the two models $A,B$ are described by two sets of parameters ${\bm{\theta}}_{A},{\bm{\theta}}_{B}$, respectively. Using the marginalization rule we can write: $p(d|I,H_{I})=\int_{\Theta_{I}}p(d|{\bm{\theta}}_{I},I,H_{I})\,p({\bm{\theta}}_{I}|I,H_{I})\,{\rm d}{\bm{\theta}}_{I}\,,$ (11) for $I=A,B$. The integral in Eq.
(11) represents the evidence computed for the hypotheses $H^{\prime}_{I}=\\{H_{I},I\\}$, for $I=A,B$ (i.e. the involved model becomes part of the background hypothesis). Then, we obtain that the Bayes’ factor $\mathcal{B}_{B}^{A}$ can be computed as $\mathcal{B}_{B}^{A}=\frac{p(d|H^{\prime}_{A})}{p(d|H^{\prime}_{B})}\,.$ (12) From the previous results, we understand that if $\mathcal{B}_{B}^{A}>1$ then model $A$ is favored by the data, and vice versa if $\mathcal{B}_{B}^{A}<1$. It is important to observe that the Bayes’ factor implicitly takes into account the so-called Occam’s razor: if two models are both able to capture the features of the data, then the one with fewer parameters will be favored Sivia & Skilling (2006). In our analysis, this is a crucial point, since different models have different numbers of parameters. ### 3.2 Nested Sampling In a realistic scenario, the likelihood function cannot be treated analytically and the parameter space has a non-trivial number of dimensions. For these reasons, the estimation of Eq. (11) is performed using statistical computational techniques: we employ the nested sampling technique introduced in Ref. Skilling (2006), designed to compute the evidence and explore the full parameter space. The uncertainties associated with the evidence estimates are computed according to Ref. Skilling (2006) and increased by one order of magnitude, in order to conservatively take into account systematics. The latter are expected, since the model considered for our analyses (as many others) cannot capture all the physical processes involved in kNe, and it suffers from large uncertainties in the implementation of atomic physics and radiative processes. We perform inference with cpnest Pozzo & Veitch, a parallelized nested sampling implementation.
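The Occam's razor built into Eqs. (8)-(12) can be illustrated with a toy one-dimensional problem (this is a pedagogical sketch using brute-force marginalization, not the nested-sampling machinery used in the paper). A single datum $y=0.1$ with unit noise is compared under a zero-parameter model ($\mu$ fixed at 0) and a one-parameter model ($\mu$ free with a wide uniform prior); because the free parameter spreads prior mass over values the data do not support, the simpler model wins:

```python
import math

def gauss(x, mu, sigma):
    """Gaussian probability density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def evidence_free_mu(y, sigma, lo=-5.0, hi=5.0, n=2001):
    """Eq. (8) by trapezoid-rule marginalization over a uniform prior on mu."""
    h = (hi - lo) / (n - 1)
    total = 0.0
    for i in range(n):
        mu = lo + i * h
        weight = 0.5 if i in (0, n - 1) else 1.0   # trapezoid endpoints
        total += weight * gauss(y, mu, sigma)
    return total * h / (hi - lo)   # divide by the prior volume

y, sigma = 0.1, 1.0
Z_fixed = gauss(y, 0.0, sigma)        # model A: no free parameter, mu = 0
Z_free = evidence_free_mu(y, sigma)   # model B: mu uniform in [-5, 5]
print(Z_fixed / Z_free)  # Bayes' factor ~4 in favor of the simpler model
```

The same logic, scaled up to many parameters and an intractable likelihood, is what nested sampling evaluates numerically.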
We use 1024 live points and, for every step, we set a maximum number of 2048 Markov-chain Monte Carlo (MCMC) iterations for the exploration of the parameter space. The proposal step method used in the MCMC is the one implemented by default in the cpnest software. It corresponds to a cycle over four different proposal methods: a random-walk step Goodman & Weare (2010), a stretch move Goodman & Weare (2010), a differential evolution method Nelson et al. (2013) and a proposal based on the eigenvectors of the covariance matrix of the ensemble samples, as implemented in Ref. Veitch et al. (2015). ### 3.3 Choice of Priors Table 1: List of intrinsic and extrinsic parameters involved in the analysis and the respective prior bounds for the cases of anisotropic geometry. For isotropic geometry cases, the bounds are identical except for the opacity $\kappa$ of the dynamical component (D), where the low-latitude and high-latitude bounds are joined together.

Intrinsic Ejecta Parameters ${\bm{\theta}}_{\rm ej}^{\rm(D,V,N)}$

Comp. | ${m}$ $[10^{-2}{\rm M}_{\odot}]$ | ${v_{\rm rms}}$ $[c]$ | $\kappa_{\rm high}$ $[{\rm cm^{2}\,g^{-1}}]$ | $\kappa_{\rm low}$ $[{\rm cm^{2}\,g^{-1}}]$ | $\theta_{\rm step}$ $[{\rm rad}]$
D | [0.1, 10] | [0.15, 0.333] | [0.1, 5] | [5, 30] | $\pi/4$
N | [0.01, 0.75] | [0.05, 0.15] | [0.01, 5] | – | $\pi/3$
V | [1, 20] | [0.001, 0.1] | [0.01, 30] | – | –

Intrinsic Global Parameters ${\bm{\theta}}_{\rm glob}$

${T^{\rm Ni}_{\rm floor}}$ | [K] | [500, 8000]
${T^{\rm LA}_{\rm floor}}$ | [K] | [500, 8000]
$\epsilon_{0}$ | [${\rm erg\,g^{-1}\,s^{-1}}$] | $[2\times 10^{17},5\times 10^{19}]$

Extrinsic Parameters ${\bm{\theta}}_{\rm ext}$

$D_{L}$ | [Mpc] | [15, 50]
$\iota$ | [deg] | [0, 70]

In our analysis, we assume the sky position of the source to be known and the time of coalescence to be the same as the trigger time of GW170817 Abbott et al. (2017a). Furthermore, we do not take into account the redshift contribution, given the larger systematic uncertainties in the model.
We employ the parameters shown in Tab. 1, which can be divided into three subsets: the intrinsic ejecta parameters ${\bm{\theta}}_{\rm ej}^{({\rm D},{\rm V},{\rm N})}$, the intrinsic global parameters ${\bm{\theta}}_{\rm glob}$, and the extrinsic parameters ${\bm{\theta}}_{\rm ext}$. The intrinsic ejecta parameters, ${\bm{\theta}}_{\rm ej}^{(k)}$ for $k={\rm D},{\rm V},{\rm N}$, characterize the properties of each ejecta component; they are the amount of ejected mass, ${m}$, the rms velocity of the fluid, ${v_{\rm rms}}$, and the grey opacity, $\kappa$. Under the assumption of isotropic geometry, the intrinsic ejecta parameters ${\bm{\theta}}_{\rm ej}^{(k)}$ are defined by a single value for every shell, i.e. a single number characterizes the entire profile of the parameter of interest, since it is spherically symmetric. However, for the anisotropic cases, we have to introduce more than one independent parameter to describe the angular profile of a specific variable: this is the case for the opacity parameter of the dynamical component, where the profile is chosen as a step function characterized by two different parameters, $\kappa_{\rm low}$ and $\kappa_{\rm high}$, at low and high latitudes respectively. In such cases, the angle $\theta_{\rm step}$ denotes the angle at which the profile changes value, as mentioned in Sec. 2.3. The intrinsic global parameters, ${\bm{\theta}}_{\rm glob}$, represent the properties of the source common to every component, such as the floor temperatures, ${T^{\rm Ni}_{\rm floor}}$ and ${T^{\rm LA}_{\rm floor}}$, and the heating rate constant $\epsilon_{0}$. In principle, the latter is a universal property which defines the nuclear heating rate as expressed in Eq. 2. The whole set of intrinsic parameters, ${\bm{\theta}}_{\rm glob}$ and ${\bm{\theta}}_{\rm ej}^{(k)}$, determines the physical dynamics of the system and, therefore, the properties of the kN emission, irrespective of the observer location.
The extrinsic parameters, ${\bm{\theta}}_{\rm ext}$, are the luminosity distance of the source, $D_{L}$, and the viewing angle $\iota$. These parameters do not depend on the physical properties of the source and are related to the observed signal through geometrical arguments. The prior distributions for all the parameters are taken uniform within their bounds, with the following exceptions. For the extrinsic parameters ${\bm{\theta}}_{\rm ext}=\\{D_{L},\iota\\}$, we set the priors equal to the marginalized posterior distributions coming from the low-spin-prior measurement of GW170817 Abbott et al. (2019b). For the heating rate factor $\epsilon_{0}$, we use a prior distribution uniform in $\log\epsilon_{0}$, i.e. $p(\epsilon_{0}|H)\propto{\epsilon_{0}}^{-1}$, since this parameter strongly affects the LC and is free to vary over a wide range. Moreover, we adopt a prior range in accordance with the estimate given in Ref. Korobkin et al. (2012). Tab. 1 shows the prior bounds used for the analysis of the anisotropic cases. For the isotropic studies, the bounds are identical except for the opacity $\kappa$ of the dynamical component, where the low-latitude and high-latitude bounds are joined together.

### 3.4 Likelihood Function

The data $\left\\{d_{b,i}\pm\sigma_{b,i}\right\\}$ are the apparent magnitudes observed from AT2017gfo, with their standard deviations. They have been collected from Villar et al. (2017b), where all the precise references to the original works and to the data reduction techniques can be found. The index $b$ runs over all considered photometric bands, covering a wide photometric range from the UV to the NIR, while for each band $b$ the index $i$ runs over the corresponding sequence of $N_{b}$ temporal observations. Additionally, the magnitudes have been corrected for Galactic extinction Cardelli et al. (1989).
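The Gaussian likelihood over these per-band magnitudes (defined in Eq. (13) below) reduces to a chi-square-style sum; a minimal sketch, assuming the data are stored as per-band lists:

```python
def log_likelihood(data, sigma, model):
    """Gaussian log-likelihood over photometric bands, up to a constant.

    data[b][i]  : observed apparent magnitude d_{b,i} in band b at epoch i
    sigma[b][i] : its standard deviation sigma_{b,i}
    model[b][i] : model magnitude mag_{b,i}(theta) for the same band/epoch
    """
    chi2 = 0.0
    for b in data:                       # loop over photometric bands
        for d, s, m in zip(data[b], sigma[b], model[b]):
            chi2 += (d - m) ** 2 / s ** 2
    return -0.5 * chi2
```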
We introduce a Gaussian likelihood function in the apparent magnitudes, with means $d_{b,i}$ and variances $\sigma^{2}_{b,i}$ from the observations of AT2017gfo, ${\log p({\bm{\theta}}|d,H)}\propto-\frac{1}{2}\sum_{b}\sum_{i=1}^{N_{b}}\frac{\left|d_{b,i}-{\rm mag}_{b,i}({\bm{\theta}})\right|^{2}}{\sigma^{2}_{b,i}}\,,$ (13) where ${\rm mag}_{b,i}({\bm{\theta}})$ are the magnitudes generated by the LC model of Sec. 2, which encodes the dependency on the parameters ${\bm{\theta}}$, for every band $b$ at different times $i$. The likelihood definition of Eq. (13) is in accordance with the residuals introduced in Ref. Perego et al. (2017a) and it takes into account the uncertainties due to possible technical issues of the instruments and generic non-stationary contributions, providing a good characterization of the noise 111The work presented in Ref. Villar et al. (2017b) also employs a Gaussian likelihood, with the inclusion of an additional uncertainty parameter, while in Ref. Coughlin et al. (2017) the authors proposed a likelihood distributed as a $\chi^{2}$.. For both geometric configurations, isotropic (ISO) and anisotropic (ANI), we perform Bayesian analyses using different combinations of components, testing their capability to fit the data.

## 4 Results

In this section we present the results gathered from the Bayesian analyses. In Sec. 4.1 we describe the capability of the synthetic LCs to fit the observed data. After that, in Sec. 4.2, we discuss the estimated evidences and infer the preferred model. Finally, in Sec. 4.3, we discuss the interpretation of the recovered posterior distributions.

### 4.1 Light Curves

Figure 2 shows the LCs computed from the recovered maximum-likelihood parameters for each discussed model. The estimated LCs are compared with AT2017gfo data for six representative photometric bands. Moreover, Fig. 3 shows the uncertainties associated with the estimated LCs, computed over the recovered posterior samples, for each considered model.
Generally, the errors associated with the near-UV (NUV) magnitudes are larger compared with the other bands, reflecting the lower number of data points in this photometric region. Furthermore, none of the considered models is able to fully capture the trend described by the observed data in the $K_{s}$ band at times larger than 10 days, within the provided prior bounds. This is expected from the simplified treatment of the radiation transport and the approximated heating rate in our models. The isotropic models (ISO-D and ISO-DV) fit the data well at early times and their LCs capture the general trends of the data. However, at times larger than ${\sim}8$ days, these models do not capture all the features of the data within the provided prior bounds. This inaccuracy is particularly evident in the NIR, where the LCs predicted by the ISO-D and the ISO-DV models do not recover the correct slopes of the data. The anisotropic single-component case, ANI-D, is able to adapt to the different features present in the data, even on large time-scales. However, it overestimates the kN emission in the blue band. This inconsistency could be reduced by allowing the high-latitude opacity parameter $\kappa_{\rm high}$ to take lower values. Regarding the anisotropic two-component models, the ANI-VN fits the data well at early times, but largely underestimates the data at times ${\gtrsim}5$ days. This is due to the absence of a fast blue component. The anisotropic ANI-DV model gives LCs similar to ANI-D except for a slight excess of power at times ${\gtrsim}10$ days, especially in the NIR region, i.e. the $z$, $K$ and $K_{s}$ bands. This behavior could be mitigated by reducing the lower bound on the ${T^{\rm LA}_{\rm floor}}$ parameter. However, it could also indicate a significant deviation from the black-body emission adopted in our model at late times.
Furthermore, the ANI-DV model overshoots the data in the NUV, as is the case for the respective single-component model ANI-D. This can be explained by looking at the recovered value of the dynamical ejected mass, which exceeds theoretical expectations estimated from NR simulations Perego et al. (2019); Endrizzi et al. (2020); Nedora et al. (2019); Bernuzzi et al. (2020); Nedora et al. (2021) (see Sec. 4.3.4). Similar considerations hold for the anisotropic three-component case ANI-DVN. However, the uncertainties on the estimated LCs for this model are narrower than the ones obtained from the ANI-DV, corresponding to an improvement in the capability of constraining the measurement. The main improvement of the three-component ANI-DVN model over the two-component ANI-DV model lies in its ability to better fit the early-time data, due to the inclusion of a third component. Figure 2: Apparent magnitudes computed using the maximum-likelihood parameters for each considered model: ISO-D in blue, ISO-DV in yellow, ANI-D in green, ANI-DV in red, ANI-VN in purple and ANI-DVN in brown. The different panels refer to different photometric bands, respectively $B$, $g$, $r$, $z$, $i$ and $Ks$. The black squares are the observed data of AT2017gfo for the corresponding photometric band with the respective standard deviations. Figure 3: Deviations from the maximum-likelihood template of the LCs computed from the whole set of posterior samples. The solid lines represent the median values and the shadowed areas are the 90% credible regions. Each color refers to a different model: ISO-D in blue, ISO-DV in yellow, ANI-D in green, ANI-DV in red, ANI-VN in purple and ANI-DVN in brown. The different panels show different photometric bands, respectively $B$, $g$, $r$, $z$, $i$ and $Ks$.

### 4.2 Evidences

Table 2: Estimated log-evidences for the analyzed kN models. The reported uncertainties refer to the standard deviations estimated according to Ref. Skilling (2006).
Profile | Components | $\log p(d|{\rm Model})$
---|---|---
ISO | D | $-23510\pm 1$
ISO | D+V | $-19719\pm 1$
ANI | D | $-9920\pm 1$
ANI | N+V | $-11103\pm 1$
ANI | D+V | $-9556\pm 1$
ANI | D+N+V | $-9439\pm 1$

The logarithmic evidences estimated for the considered models are shown in Tab. 2. The evidence increases with the number of model components. This is consistent with the hierarchy observed in the LC residuals and with the better match to the data for multi-component models. The only exception is the ANI-VN case, for which the features of the data at late times are not well captured due to the absence of a fast equatorial component. Furthermore, for a fixed number of components, the anisotropic geometries are always favored over the isotropic geometries, with a $\log\mathcal{B}_{\rm ISO}^{\rm ANI}$ of the order of $10^{4}$. The preferred model among the considered cases is the anisotropic three-component one, in agreement with previous findings, e.g. Cowperthwaite et al. (2017); Perego et al. (2017a); Villar et al. (2017b).

### 4.3 Posterior Distributions

Table 3: Recovered values from the posterior distributions of the intrinsic ejecta parameters. The reported quantities are the means with the 90% credible regions. The conventions $\gtrsim$, $\lesssim$ denote marginalized posterior distributions constrained respectively around the upper and the lower prior bounds. We remark that $\kappa_{\rm low}$ and $\kappa_{\rm high}$ refer respectively to the grey opacity parameters for low and high latitudes.
Model | Dynamical ejecta | Viscous ejecta | $\nu$-driven wind
---|---|---|---
 | ${m}$ | ${v_{\rm rms}}$ | $\kappa_{\rm high}$ | $\kappa_{\rm low}$ | ${m}$ | ${v_{\rm rms}}$ | $\kappa$ | ${m}$ | ${v_{\rm rms}}$ | $\kappa$
 | $\left[10^{-2}{\rm M_{\odot}}\right]$ | $[c]$ | $\left[{\rm cm}^{2}\,{\rm g}^{-1}\right]$ | $\left[10^{-2}{\rm M_{\odot}}\right]$ | $[c]$ | $\left[{\rm cm}^{2}\,{\rm g}^{-1}\right]$ | $\left[10^{-2}{\rm M_{\odot}}\right]$ | $[c]$ | $\left[{\rm cm}^{2}\,{\rm g}^{-1}\right]$
ISO-D | $0.787^{+0.016}_{-0.017}$ | $0.1758^{+0.0007}_{-0.0008}$ | $6.14^{+0.11}_{-0.10}$ | – | – | – | – | – | – | –
ISO-DV | $1.139^{+0.048}_{-0.044}$ | $0.213^{+0.003}_{-0.003}$ | $4.13^{+0.08}_{-0.09}$ | ${\lesssim}1$ | ${\gtrsim}0.1$ | $4.99^{+0.12}_{-0.11}$ | – | – | –
ANI-D | $0.807^{+0.022}_{-0.018}$ | $0.236^{+0.001}_{-0.002}$ | ${\lesssim}0.1$ | ${\gtrsim}30$ | – | – | – | – | – | –
ANI-DV | $1.231^{+0.041}_{-0.048}$ | $0.233^{+0.002}_{-0.002}$ | ${\lesssim}0.1$ | $12.3^{+0.6}_{-0.5}$ | ${\lesssim}1$ | $0.0276^{+0.0007}_{-0.0006}$ | $2.23^{+0.05}_{-0.05}$ | – | – | –
ANI-VN | – | – | – | – | ${\lesssim}1$ | $0.0064^{+0.0001}_{-0.0001}$ | $0.45^{+0.01}_{-0.01}$ | $\gtrsim 0.75$ | $0.0998^{+0.0003}_{-0.0008}$ | $1.002^{+0.006}_{-0.002}$
ANI-DVN | $1.378^{+0.063}_{-0.071}$ | $0.233^{+0.002}_{-0.002}$ | ${\lesssim}0.1$ | $11.1^{+0.7}_{-0.6}$ | ${\lesssim}1$ | $0.0318^{+0.0008}_{-0.0008}$ | $2.96^{+0.07}_{-0.09}$ | $0.247^{+0.025}_{-0.061}$ | $0.0502^{+0.0006}_{-0.0002}$ | $2.29^{+0.14}_{-0.09}$

Table 4: Recovered values from the posterior distributions of the global intrinsic parameters and of the extrinsic parameters. The reported quantities are the means with the 90% credible regions. The conventions $\gtrsim$, $\lesssim$ denote marginalized posterior distributions constrained respectively around the upper and the lower prior bounds.
Model | ${T^{\rm Ni}_{\rm floor}}$ | ${T^{\rm LA}_{\rm floor}}$ | $\epsilon_{0}$ | $\iota$ | $D_{L}$
---|---|---|---|---|---
 | $\left[{\rm K}\right]$ | $\left[{\rm K}\right]$ | $\left[10^{18}{\rm erg}\,{\rm g}^{-1}\,{\rm s}^{-1}\right]$ | $\left[{\rm deg}\right]$ | $\left[{\rm Mpc}\right]$
ISO-D | ${4335}^{+3157}_{-3427}$ | ${2484}^{+450}_{-410}$ | ${66.5}^{+1.5}_{-1.4}$ | ${33}^{+27}_{-25}$ | ${\gtrsim}50$
ISO-DV | ${6740}^{+778}_{-612}$ | ${1126}^{+243}_{-311}$ | ${21.21}^{+0.05}_{-0.05}$ | ${34}^{+24}_{-26}$ | ${48.5}^{+0.3}_{-0.4}$
ANI-D | ${5064}^{+47}_{-50}$ | ${746}^{+219}_{-223}$ | ${161}^{+3}_{-5}$ | ${43.9}^{+0.5}_{-0.5}$ | ${\gtrsim}50$
ANI-DV | ${5031}^{+105}_{-99}$ | ${704}^{+175}_{-180}$ | ${38.7}^{+0.9}_{-0.9}$ | ${43.9}^{+0.5}_{-0.5}$ | ${\gtrsim}50$
ANI-VN | ${3356}^{+56}_{-35}$ | ${\lesssim}{500}$ | ${8.5}^{+0.1}_{-0.1}$ | ${52}^{+1}_{-1}$ | ${22.6}^{+0.2}_{-0.2}$
ANI-DVN | ${5995}^{+105}_{-118}$ | ${\lesssim}{500}$ | ${30.4}^{+0.2}_{-0.1}$ | ${57}^{+1}_{-1}$ | ${\gtrsim}50$

In the following paragraphs, we discuss the properties of the posterior distributions for each model and their physical interpretation. Table 3 and Tab. 4 show the mean values of the parameters, and their 90% credible regions, extracted from the recovered posterior distributions. A general feature is that the marginalized posterior for the ejected mass of the viscous component, when this component is involved, is always constrained against the lower bound of $10^{-2}~{}{\rm M_{\odot}}$. Moreover, for the majority of the analyses, the distance parameter is biased towards larger values, inconsistent with the estimates from Ref. Abbott et al. (2017a, b), and the heating rate parameter $\epsilon_{0}$ is generally overestimated compared with the estimates from nuclear calculations Korobkin et al. (2012); Barnes et al. (2016); Kasen & Barnes (2019); Barnes et al. (2020); Zhu et al. (2020). This behavior can be explained from Eqs.
(2), (5) and (6): $D_{L}$ and $\epsilon_{0}$ are largely degenerate and jointly determine the brightness of the observed LCs. Thus, the correlations between these parameters induce biases in the recovered values. The physical explanation of this effect lies in the poor characterization of the model in the NIR bands: this lack of knowledge generates a fainter kN in this photometric region and, in order to match the observed data, the recovered heating rate is larger. Note that this bias contributes to the overestimation of the LC in the high-frequency bands (i.e. $U$, $B$ and $V$), where the number of measurements is lower with respect to the other employed bands.

#### 4.3.1 ISO-D

We start by considering the simplest employed model, the isotropic one-component model labelled ISO-D. Fig. 4 shows the marginalized posterior distribution in the $({m},{v_{\rm rms}})$ plane. The velocity is constrained around ${\sim}0.18\,c$ while the ejected mass lies around $8{\times}10^{-3}~{}{\rm M_{\odot}}$, both in agreement with the observational results recovered in Ref. Villar et al. (2017b); Cowperthwaite et al. (2017); Abbott et al. (2017d); Coughlin et al. (2018). Moreover, the opacity posterior peaks in proximity of $\kappa\sim 6~{}{\rm cm^{2}\,g^{-1}}$, consistent with Ref. Cowperthwaite et al. (2017). Regarding the extrinsic parameters, the posterior for the inclination angle $\iota$ coincides with the imposed prior, since the employed profiles do not depend on this coordinate. The model is not able to constrain the value of ${T^{\rm Ni}_{\rm floor}}$, which returns a posterior identical to the prior, while ${T^{\rm LA}_{\rm floor}}$ is recovered around $2500~{}{\rm K}$. The flat posterior distribution obtained for the ${T^{\rm Ni}_{\rm floor}}$ parameter highlights the unsuitability of this model for capturing the features of the observed data.
#### 4.3.2 ANI-D

For the anisotropic single-component model ANI-D, the value of the ejected mass agrees with the one coming from the ISO-D case. However, in order to fit the data, ANI-D requires a larger velocity, ${\sim}0.23\,c$, as shown in Fig. 4. The high-latitude opacity is constrained around the lower bound of $0.1~{}{\rm cm^{2}\,g^{-1}}$, while the low-latitude contribution exceeds $30~{}{\rm cm^{2}\,g^{-1}}$, which largely differs from the respective isotropic case, ISO-D. In practice, this is due to the lack of ejected mass, which is balanced by a more opaque environment. Nevertheless, according to the estimated evidences, this model is preferred with respect to the isotropic case. The reason is clear from Fig. 2: the anisotropic model is able to characterize the late-time features of the data. The heating rate parameter $\epsilon_{0}$ is largely biased towards larger values with respect to the results of Ref. Korobkin et al. (2012), in order to compensate for the lack of ejected matter. Indeed, a larger heating factor $\epsilon_{0}$ leads to brighter LCs, and this effect is capable of mimicking an increase in the amount of ejected matter. The posterior distribution for the viewing angle $\iota$ peaks around 44 degrees, inconsistent with the estimates coming from the GRB analyses Abbott et al. (2017c); Savchenko et al. (2017); Ghirlanda et al. (2019). Moreover, unlike the ISO-D case, both temperature parameters ${T^{\rm Ni}_{\rm floor}}$ and ${T^{\rm LA}_{\rm floor}}$ are well constrained in the ANI-D analysis: these parameters mostly affect the model at late times, modifying the slope of the recovered LCs. Thus, these terms are responsible for the improvement in the fitted LCs.

#### 4.3.3 ISO-DV

Figure 5 shows the posterior distribution for some exemplary intrinsic ejecta parameters. For both components, the individual most-likely value of the ejected mass parameter lies around ${\sim 10^{-2}~{}{\rm M_{\odot}}}$, in agreement with the measurement presented in Ref.
Abbott et al. (2017d). This range of values slightly overestimates the expectations coming from NR simulations for the dynamical component Perego et al. (2019); Nedora et al. (2019); Endrizzi et al. (2020); Nedora et al. (2021); Bernuzzi et al. (2020). This could be explained by considering the effect of the spiral-wave wind Nedora et al. (2019), which constitutes a massive and fast ejecta on timescales of $10{-}100$ ms. The spiral-wave wind is not considered as a component in our models because it would be highly degenerate with the dynamical ejecta. The recovered opacity parameters are roughly $4{-}5~{}{\rm cm^{2}\,g^{-1}}$. The velocity of the dynamical component is greater than the secular velocity, in accordance with the theoretical expectations. Comparing with other fitting models, the recovered ejected masses ${m}^{\rm(D)}$ are smaller than those of the analogous analysis of Ref. Villar et al. (2017b), while they roughly agree with the estimates coming from Ref. Coughlin et al. (2018). However, it is not possible to perform an apples-to-apples comparison between these results, due to the systematic modeling differences between the semi-analytical model (used in this work) and the radiative-transfer methods employed in Ref. Villar et al. (2017b); Coughlin et al. (2018). The temperature parameters, ${T^{\rm Ni}_{\rm floor}}$ and ${T^{\rm LA}_{\rm floor}}$, are much better constrained compared with the respective isotropic single-component case ISO-D, and this is reflected in the improved fitting of the different trends of the data in the high-frequency bands. The marginalized posterior distribution of the inclination angle coincides with the prior, in accordance with the isotropic description. Furthermore, the biases on the distance $D_{L}$ and the heating parameter $\epsilon_{0}$ are reduced with respect to the ISO-D case, since the two-component case accounts for a larger amount of total ejected mass.
Indeed, increasing the number of ejecta components other than the dynamical one, the overall kN becomes brighter, since additional terms, becoming transparent at later times, are included in the computation of the emitted flux. Then, $\epsilon_{0}$ tends towards lower values in order to compensate for this effect and fit the data. According to the estimated evidences, the isotropic two-component ISO-DV model is disfavored with respect to the anisotropic single-component ANI-D. The main difficulty of ISO-DV is, again, to fit the data at late times. Figure 4: Marginalized posterior distribution of the ejected mass ${m}$ and the velocity ${v_{\rm rms}}$ of the dynamical component for the one-component studies, ISO-D and ANI-D. The anisotropic case requires larger velocities in order to fit the observed data.

#### 4.3.4 ANI-DV

The ANI-DV model is the second-best fitting model to AT2017gfo among the considered cases. Fig. 5 shows the posterior distribution for some exemplary intrinsic parameters of the dynamical and the viscous components. The ejected mass value lies around ${\sim}10^{-2}~{}{\rm M_{\odot}}$, in agreement with previous estimates Abbott et al. (2017d). On the other hand, the recovered mass slightly overestimates the results coming from targeted NR simulations Perego et al. (2019); Nedora et al. (2019); Endrizzi et al. (2020); Nedora et al. (2021); Bernuzzi et al. (2020), similarly to ISO-DV (see Sec. 4.3.3). The velocity is well constrained around ${\sim}0.23\,c$. The recovered low-latitude opacity corresponds roughly to $12~{}{\rm cm^{2}\,g^{-1}}$ and the high-latitude opacity is constrained around the lower bound, $0.1~{}{\rm cm^{2}\,g^{-1}}$. This result can be explained by considering that the mass of the dynamical component slightly overshoots the NR expectations Perego et al. (2019); Nedora et al. (2019); Endrizzi et al. (2020); Nedora et al. (2021); Bernuzzi et al.
(2020) (by a factor of ${\sim}1.25$), and by noticing that the ejected mass correlates with the luminosity distance and the heating factor (which are generally biased). This combination generates the overestimation of the data in the NUV region. In order to improve the fit to the observed data, the model tries to compensate for this effect and the high-latitude opacity tends to move towards lower values. Concerning the viscous component, its velocity is an order of magnitude smaller than that of the dynamical ejecta, in agreement with the expectations. This reinforces the hypothesis that the viscous ejecta contributes mostly to the red kN. The posterior distribution of the opacity parameter peaks around ${\sim}5~{}{\rm cm^{2}\,g^{-1}}$, denoting a moderately opaque environment. Fig. 6 shows the posterior distribution for the extrinsic parameters. The temperatures ${T^{\rm Ni}_{\rm floor}}$ and ${T^{\rm LA}_{\rm floor}}$ are well constrained around ${\sim}5000~{}{\rm K}$ and ${\sim}700~{}{\rm K}$, respectively. The agreement with Ref. Korobkin et al. (2012) on the estimate of the heating factor $\epsilon_{0}$ improves with respect to the ANI-D case, due to the inclusion of an additional component, similarly to what is discussed in Sec. 4.3.3. The posterior for the inclination angle is similar to the ANI-D case, consistent with the fact that the viscous component, as we have defined it, does not introduce further information on the inclination. Figure 5: Marginalized posterior distribution for some exemplary ejecta intrinsic parameters extracted from the analyses of ISO-DV, ANI-DV and ANI-DVN. The reported parameters are the ejected mass $m^{\rm(D)}$, the velocity ${v_{\rm rms}}^{\rm(D)}$ and the low-latitude opacity $\kappa_{\rm low}^{\rm(D)}$ for the dynamical component, while for the viscous component we report the ejected mass $m^{\rm(V)}$ and the opacity $\kappa_{\rm low}^{\rm(V)}$.
For ISO-DV, the low-latitude opacity of the dynamical component is replaced with the overall opacity $\kappa^{\rm(D)}$, due to the different geometry. Figure 6: Marginalized posterior distribution for the global intrinsic parameters and the extrinsic parameters extracted from the analyses of ISO-DV, ANI-DV and ANI-DVN. The reported parameters are the luminosity distance $D_{L}$, the viewing angle $\iota$, the floor temperatures ${T^{\rm LA}_{\rm floor}}$ and ${T^{\rm Ni}_{\rm floor}}$, and the logarithm of the heating factor $\epsilon_{0}$. For the ISO-DV case, the posterior distribution for the viewing angle $\iota$ coincides with the prior due to the employed geometry.

#### 4.3.5 ANI-VN

According to Tab. 2, ANI-VN is the least likely model among all anisotropic cases. As previously mentioned, the reason for this is clear from the LCs. The parameters of the viscous component are characterized by a slow velocity of ${\sim}6{\times}10^{-3}\,c$ and a low-opacity environment, $\kappa\sim 0.5~{}{\rm cm^{2}\,g^{-1}}$. On the other hand, the neutrino-driven wind mass is overestimated compared with the aftermath computations presented in Ref. Perego et al. (2017a), in order to compensate for the lack of overall ejected mass due to the absence of a dynamical component. Moreover, the neutrino-driven wind is characterized by a realistic velocity of ${\sim}0.1\,c$ and by a low-opacity environment, $\kappa\sim 1~{}{\rm cm^{2}\,g^{-1}}$. Regarding the extrinsic parameters, the ANI-VN model is the case that gives the best agreement with Ref. Korobkin et al. (2012) in terms of the heating factor. The distance, instead, is recovered around ${\sim}20~{}{\rm Mpc}$, underestimating the GW distance Abbott et al. (2017a). This result could be explained by the lower amount of total ejected mass and by the lower heating rate compared with the other cases (see Tab. 3): this deficit generates a fainter kN, which biases the source to appear closer to the observer in order to fit the data.
The ${T^{\rm Ni}_{\rm floor}}$ parameter takes lower values (${\sim}3300$ K) compared with the ANI-DV case (${\sim}5000$ K), since the model has to fit the data employing a polar geometry (N) instead of an equatorial ejecta (D). The viewing angle is biased toward larger values, roughly ${\sim}50$ deg, inconsistent with the GRB expectations Abbott et al. (2017c); Savchenko et al. (2017).

#### 4.3.6 ANI-DVN

This is the model that gives the largest evidence, within the provided prior bounds. Regarding the dynamical and viscous ejecta components, the general features are similar to those of the ANI-DV case. The dynamical ejected mass is slightly overestimated compared with NR simulations Perego et al. (2019); Nedora et al. (2019); Endrizzi et al. (2020); Nedora et al. (2021); Bernuzzi et al. (2020), by a factor of ${\sim}2$. The dynamical component is described by a low-opacity environment at high latitudes ($\kappa_{\rm high}\sim 0.1~{}{\rm cm^{2}\,g^{-1}}$) and a high opacity at low latitudes ($\kappa_{\rm low}\sim 11~{}{\rm cm^{2}\,g^{-1}}$), in agreement with NR simulations Perego et al. (2019); Nedora et al. (2019); Endrizzi et al. (2020); Nedora et al. (2021); Bernuzzi et al. (2020). These results also approximately agree with other observational estimates (e.g., Villar et al., 2017b; Cowperthwaite et al., 2017; Abbott et al., 2017d; Coughlin et al., 2018). Furthermore, the ‘D’ component results in the fastest ejecta shell, validating the interpretation that this contribution is generated on dynamical time-scales. On the other hand, the viscous ejecta is characterized by an average opacity of ${\sim}3~{}{\rm cm^{2}\,g^{-1}}$ and by a low velocity of ${\sim}3{\times}10^{-2}\,c$ (see Tab. 3), an order of magnitude smaller than that of the dynamical ejecta. These results agree with the studies presented in Ref. Radice et al. (2018c) and they contribute to the LCs in the optical band.
Regarding the neutrino-driven wind, the posterior distribution for its ejected mass ${m}^{\rm(N)}$ shows a bimodality, and this degeneracy correlates with the heating rate parameter $\epsilon_{0}$. This behavior can be seen in Fig. 7, which shows the marginalized posterior distribution for $\epsilon_{0}$ and for the total ejected mass ${M_{\rm ej}}$, defined as ${M_{\rm ej}}=\sum_{k={\rm D,N,V}}{m}^{(k)}\,,$ (14) where the index $k$ runs over all the involved components. The marginalized posterior distribution for ${m}^{\rm(N)}$ has its dominant peak in proximity of $2.5{\times}10^{-3}~{}{\rm M_{\odot}}$, while the secondary mode is located slightly below $2{\times}10^{-3}~{}{\rm M_{\odot}}$. Despite the bimodality, the recovered values of ${m}^{\rm(N)}$ are smaller compared with the same parameter extracted from the ANI-VN analysis. These results are largely consistent with the aftermath computations Perego et al. (2014) and with the theoretical expectations Perego et al. (2017a), as are the recovered velocity and opacity parameters. Furthermore, also in the ANI-DVN case, the viewing angle is biased toward larger values, roughly ${\sim}60$ deg. The same trend is shown by the anisotropic three-component model employed in Ref. Villar et al. (2017b). The posterior distribution for the ${T^{\rm Ni}_{\rm floor}}$ parameter peaks around ${\sim}6000~{}{\rm K}$, while the temperature ${T^{\rm LA}_{\rm floor}}$ is constrained around the lower bound, $500~{}{\rm K}$. Figure 7: Marginalized posterior distribution of the heating parameter $\epsilon_{0}$ and the total ejected mass ${M_{\rm ej}}$ for three selected cases: ISO-DV (blue), ANI-DV (yellow) and ANI-DVN (green). The heating parameter $\epsilon_{0}$ is plotted using the logarithm to base 10 in order to highlight the recovered orders of magnitude. The total mass ${M_{\rm ej}}$ is computed by extending the sum to all the involved components.
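The total ejected mass of Eq. (14) is evaluated sample by sample over the posterior, as in Fig. 7; a minimal sketch with stand-in Gaussian draws (the real inputs would be the nested-sampling posterior samples of the three component masses):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in posterior samples for the component masses [10^-2 Msun],
# loosely inspired by the ANI-DVN values of Tab. 3 (not actual samples).
m_dyn = rng.normal(1.38, 0.04, 10000)            # dynamical ejecta
m_vis = np.abs(rng.normal(1.00, 0.05, 10000))    # viscous ejecta
m_nu = rng.normal(0.25, 0.03, 10000)             # neutrino-driven wind

# Eq. (14): total ejected mass, evaluated per posterior sample
M_ej = m_dyn + m_vis + m_nu
```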
## 5 EOS Inference

The combination of gravitational and electromagnetic signals coming from the same compact binary merger makes it possible to constrain more tightly the intrinsic properties of the system and the nuclear EOS, in the context of both BNS (e.g., Radice & Dai (2019); Radice et al. (2018b)) and black hole-NS mergers (e.g., Barbieri et al. (2019)). In this section, we apply the information coming from the NR fitting formulae Nedora et al. (2021, 2020) to the posterior distribution of the preferred kN model (ANI-DVN), in order to infer the mass ratio and the reduced tidal parameter of the BNS source. Subsequently, we combine the kN and GW results to derive constraints on the radius ${R_{1.4}}$ of an irrotational NS of 1.4 ${\rm M_{\odot}}$.

### 5.1 Mass ratio and reduced tidal parameter

A BNS is characterized by the masses of the two objects, $m_{1}$ and $m_{2}$, and by the tidal quadrupolar polarizability coefficients, $\Lambda_{i}=\frac{2}{3}\,k_{2,i}C_{i}^{-5}\,,$ (15) where $k_{2,i}$ is the quadrupolar Love number, $C_{i}=Gm_{i}/(R_{i}c^{2})$ the compactness of the star, $G$ the gravitational constant, $R_{i}$ the radius of the star, and $i=1,2$. Furthermore, we introduce the mass ratio $q=m_{1}/m_{2}\geq 1$ and the reduced tidal parameter ${\tilde{\Lambda}}$ as: ${\tilde{\Lambda}}=\frac{16}{13}\,\frac{(q+12)q^{4}\Lambda_{1}+(1+12q)\Lambda_{2}}{(1+q)^{5}}\,.$ (16) The NR fits presented in Ref. Nedora et al. (2020) use simulations targeted to GW170817 Perego et al. (2019); Endrizzi et al. (2020); Nedora et al. (2019); Bernuzzi et al. (2020); Nedora et al. (2021) and give the mass ${m}^{\rm(D)}$ and the velocity ${v_{\rm rms}}^{\rm(D)}$ of the dynamical ejecta as functions of the BNS parameters $(q,{\tilde{\Lambda}})$. In order to recover the posterior distribution of the latter, we adopt a resampling method, similar to the procedure presented in Ref. Coughlin et al. (2017); Coughlin et al.
(2018): a sample $(q,{\tilde{\Lambda}})$ is extracted from the prior distribution 222The prior distribution is taken uniform in the tidal parameter ${\tilde{\Lambda}}$; regarding the mass ratio $q$, we employ a prior distribution uniform in the mass components, which corresponds to a probability density proportional to $[(1+q)/q^{3}]^{2/5}$, analogously to GW analyses Abbott et al. (2017a); Gamba et al. (2020a)., exploiting the ranges $q\in[1,2]$ and ${\tilde{\Lambda}}\in[0,5000]$. Subsequently, the tuple $(q,{\tilde{\Lambda}})$ is mapped into the dynamical ejecta parameters $({m}^{\rm(D)},{v_{\rm rms}}^{\rm(D)})$ using the NR formulae presented in Ref. Nedora et al. (2020). The likelihood is estimated in the dynamical ejecta parameter space using a kernel density estimation of the marginalized posterior distribution recovered from the preferred model (ANI-DVN). Furthermore, since the NR relations have non-negligible uncertainties, we introduce calibration parameters $\alpha_{1},\alpha_{2}$, such that $\begin{split}\log_{10}{m}^{\rm(D)}&=(1+\alpha_{1})\cdot\log_{10}{m}^{\rm(D)}_{\rm fit}(q,{\tilde{\Lambda}})\,,\\\ {v_{\rm rms}}^{\rm(D)}&=(1+\alpha_{2})\cdot{v_{\rm rms}}^{\rm(D)}_{\rm fit}(q,{\tilde{\Lambda}})\,.\\\ \end{split}$ (17) The calibration parameters $\alpha_{1,2}$ are sampled alongside the other parameters using normally distributed priors with vanishing means and standard deviations equal to $0.2$ for both, as prescribed by the relative uncertainties of the NR fits. The resampled posterior distribution is marginalized over the calibration parameters. The BNS parameter space is explored using a Metropolis-Hastings technique. Note that a correct characterization of the fit uncertainty is crucial, since this contribution is the largest source of error in the inference of $(q,{\tilde{\Lambda}})$. Figure 8: Posterior distribution in the $({\tilde{\Lambda}},q)$ plane.
The blue solid lines refer to the resampled values extracted from the kN analysis (ANI-DVN). The orange solid lines refer to the GW results, where the samples have been reweighted over a flat prior in ${\tilde{\Lambda}}$. The green solid lines are the combined inference. The contours represent the 90% credible regions. The plot also shows the expectations of some representative EOS. The posterior distribution in the $(q,{\tilde{\Lambda}})$ plane as obtained from the dynamical ejecta properties fitted to AT2017gfo data is shown in Fig. 8. The measurement of the tidal parameter leads to ${\tilde{\Lambda}}=900^{+310}_{-780}$, with a bimodality in the marginalized posterior distribution, due to the quadratic nature of the employed NR formulae, with modes at ${\tilde{\Lambda}}\sim 370$ and ${\tilde{\Lambda}}\sim 1000$. The mass ratio is constrained to be lower than $1.54$ at the 90% confidence level. The uncertainties of these estimates are larger than those of the GW analyses Abbott et al. (2017a, 2019a); Gamba et al. (2020a), and the principal source of error is the uncertainty of the NR fit formulae. Fig. 8 also shows the results coming from the GW170817 analysis extracted from Ref. Gamba et al. (2020a). For this analysis, the data correspond to the LIGO-Virgo strains Abbott et al. (2017a, 2019a, 2019b) centered around GPS time 1187008882, with a sampling rate of 4096 Hz and a duration of 128 s. The parameter estimation has been performed with the nested sampling provided by the pbilby pipeline Ashton et al. (2019); Smith et al. (2020), employing the effective-one-body waveform approximant TEOBResumSPA Nagar et al. (2018); Gamba et al. (2020a) and analyzing the frequency range from 23 Hz to 1024 Hz. This choice minimizes waveform systematics Gamba et al. (2020b); on the other hand, it implies slightly larger statistical uncertainties on the reduced tidal parameters.
Hence, our results are more conservative than previous multimessenger analyses in the treatment of the uncertainties of the GW data. Furthermore, the GW posterior samples have been reweighted with a rejection sampling to the prior distributions employed in the kN study, in order to use the same prior information for both analyses (the prior distribution for the tidal parameters employed in Ref. Gamba et al. (2020a) is uniform in the tidal components $\Lambda_{1,2}$, whereas in our study we used a uniform prior in $\tilde{\Lambda}$). Under the assumption that GW170817 and AT2017gfo are generated by the same physical event, the $(q,{\tilde{\Lambda}})$ posterior distributions coming from the two independent analyses can be combined, tightening the constraints on the inferred quantities. The joint probability distribution is computed as the product of the single terms, $p\big{(}q,{\tilde{\Lambda}}\big{|}d_{\rm kn},d_{\rm gw}\big{)}=p\big{(}q,{\tilde{\Lambda}}\big{|}d_{\rm kn}\big{)}\cdot p\big{(}q,{\tilde{\Lambda}}\big{|}d_{\rm gw}\big{)}\,,$ (18) and the samples are extracted with a rejection sampling. The combined inference, shown in Fig. 8, leads to a constraint on the mass ratio of ${\lesssim}1.27$ and on the tidal parameter of ${\tilde{\Lambda}}=460^{+210}_{-190}$, at the 90% confidence level. Imposing these bounds, stiff nuclear EOS, such as DD2, are disfavored. ### 5.2 Neutron-star radius Table 5: Estimated values of the mass ratio $q$, reduced tidal parameter ${\tilde{\Lambda}}$ and NS radius ${R_{1.4}}$ measured from the analyses of AT2017gfo and GW170817. The ${R_{1.4}}$ values are estimated using the relation proposed in Ref. De et al. (2018); Radice & Dai (2019) and employing the chirp mass posterior distribution coming from the GW analysis Gamba et al. (2020a).
Data | $q$ | ${\tilde{\Lambda}}$ | ${R_{1.4}}$ [km]
---|---|---|---
AT2017gfo | ${\leq}1.54$ | $900^{+310}_{-780}$ | $13.46^{+0.93}_{-3.82}$
GW170817 | ${\leq}1.33$ | $510^{+350}_{-320}$ | $12.33^{+1.22}_{-1.85}$
Combined | ${\leq}1.27$ | $460^{+210}_{-190}$ | $12.16^{+0.89}_{-1.11}$

Using the universal relation presented in Ref. De et al. (2018); Radice & Dai (2019), it is possible to impose a constraint on the radius ${R_{1.4}}$ of a NS of $1.4~{}{\rm M_{\odot}}$. We employ the marginalized posterior distribution for the (source-frame) chirp mass $\mathcal{M}=(m_{1}m_{2})^{3/5}/(m_{1}+m_{2})^{1/5}$ coming from the GW170817 measurement Gamba et al. (2020a) and the posterior on the tidal parameter ${\tilde{\Lambda}}$ obtained with the joint analysis AT2017gfo+GW170817. We adopt a resampling technique to account for the uncertainties in the universal relation, introducing a Gaussian calibration coefficient with variance prescribed by Ref. De et al. (2018); Radice & Dai (2019). We estimate ${R_{1.4}}=12.16^{+0.89}_{-1.11}~{}{\rm km}$. This measurement agrees with results from the literature Annala et al. (2018); De et al. (2018); Radice & Dai (2019); Coughlin et al. (2019); Abbott et al. (2018b); Raaijmakers et al. (2020); Capano et al. (2020); Essick et al. (2020); Dietrich et al. (2020), and its overall error at the $1\sigma$ level corresponds roughly to $500~{}{\rm m}$. In Fig. 9, the ${R_{1.4}}$ estimate is compared with the mass-radius curves from a sample of nuclear EOS. Our bounds impose observational constraints on the nuclear EOS, excluding both very stiff EOS, such as DD2, BHB$\Lambda\phi$ and MS1b, and very soft equations, such as 2B. Figure 9: Posterior distribution of the radius ${R_{1.4}}$ estimated with the joint inference of AT2017gfo and GW170817, plotted on top of the mass-radius relations coming from a sample of nuclear EOS (dashed lines).
The blue solid line is computed using the mass and velocity information of the dynamical component, the orange solid curve also takes into account the contribution of the electron fraction, and the green solid line is the result with the additional inclusion of the disk mass information. ### 5.3 Incorporating information from electron fraction and disk mass We conduct two further analyses to show that additional NR information can improve the previous estimate. In the first case, we take into account the contribution of the electron fraction, while in the second we include the information on the disk mass. These studies, discussed in the following paragraphs, are intended as proof-of-principle analyses, since they involve extra assumptions on the ejecta parameters and their relation with the EOS properties. A more accurate mapping between these quantities will be discussed in a future study. #### 5.3.1 Electron fraction Figure 10: Posterior distribution in the $({\tilde{\Lambda}},q)$ plane, analogous to Fig. 8, including the contribution of the electron fraction $Y_{e}$. From NR simulations, it is possible to estimate the average electron fraction, $Y_{e}$, of the dynamical ejecta Nedora et al. (2021, 2020). This quantity is the ratio of the net number of electrons to the number of baryons, and it is closely related to the opacity of the shell Lippuner & Roberts (2015); Miller et al. (2019); Perego et al. (2019), since it largely determines the nucleosynthesis yields in low-entropy, neutron-rich matter.
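Both here and in Sec. 5.1, the inference proceeds by pushing prior draws of $(q,{\tilde{\Lambda}})$ through NR fit formulae with multiplicative calibration parameters (Eqs. 17 and 21) and weighting by a kernel density estimate of the kN posterior. The sketch below illustrates this scheme under loud simplifications: the fit functions are hypothetical placeholders (not the actual Nedora et al. (2020) polynomials), the kN posterior is mocked with Gaussian samples, the prior on $q$ is taken flat rather than mass-uniform, and importance resampling replaces the paper's Metropolis-Hastings exploration.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the NR fitting formulae of Nedora et al. (2020);
# the true fits are calibrated polynomials in (q, Lambda-tilde).
def mdyn_fit(q, lam):
    return 1e-3 * (1.0 + 0.5 * (q - 1.0)) * (1.0 + lam / 2000.0)

def vrms_fit(q, lam):
    return 0.2 / (1.0 + 0.3 * (q - 1.0))

# Mock kN posterior samples in (m_dyn, v_rms); in the paper these come
# from the ANI-DVN fit to AT2017gfo.
kn_samples = rng.normal([2e-3, 0.2], [5e-4, 0.02], size=(2000, 2)).T
kn_kde = gaussian_kde(kn_samples)

n = 10000
q = 1.0 + rng.random(n)            # flat prior on q in [1, 2] (simplified)
lam = 5000.0 * rng.random(n)       # flat prior on Lambda-tilde in [0, 5000]
a1 = rng.normal(0.0, 0.2, n)       # calibration parameters, Eq. (17)
a2 = rng.normal(0.0, 0.2, n)

# Map priors through the (calibrated) fits, Eq. (17).
m = 10.0 ** ((1.0 + a1) * np.log10(mdyn_fit(q, lam)))
v = (1.0 + a2) * vrms_fit(q, lam)

# Likelihood weights from the kN posterior density, then resample;
# marginalization over a1, a2 happens automatically.
w = kn_kde(np.vstack([m, v]))
w = w / w.sum()
idx = rng.choice(n, size=2000, p=w)
q_post, lam_post = q[idx], lam[idx]
```

The resampled `(q_post, lam_post)` play the role of the posterior samples shown in Fig. 8; all quantitative behaviour here depends on the placeholder fits.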
We compute the average opacity $\bar{\kappa}$ of a shell as the integral of the opacity over the polar angle, weighted by the mass distribution, $\bar{\kappa}=\frac{1}{{m}}\int_{0}^{\pi}\varrho(\theta)\,\kappa(\theta)\,\sin\theta\,{\rm d}\theta\,.$ (19) Imposing the assumptions on the profiles of the dynamical ejecta, we get $\bar{\kappa}^{\rm(D)}=\left(\frac{1}{2}+\frac{1}{\pi}\right)\kappa_{\rm low}^{\rm(D)}+\left(\frac{1}{2}-\frac{1}{\pi}\right)\kappa_{\rm high}^{\rm(D)}\,.$ (20) With this definition, the opacity $\bar{\kappa}$ can be mapped into the electron fraction $Y_{e}$ using the relation presented in Ref. Tanaka et al. (2020). Subsequently, $Y_{e}$ can be related to the BNS parameters $(q,{\tilde{\Lambda}})$ using the NR fit formulae Nedora et al. (2020). We introduce an additional calibration parameter $\alpha_{3}$, such that $Y_{e}=(1+\alpha_{3})\cdot{Y_{e}}^{\rm fit}(q,{\tilde{\Lambda}})\,,$ (21) with a Gaussian prior with mean zero and standard deviation of $0.2$. In this way, the opacity posterior distribution is also taken into account, introducing additional constraints on the inference of the NS matter. The results are shown in Fig. 10. This further contribution has a strong effect on the mass ratio, constraining it to be ${\lesssim}1.26$. This effect is motivated by the fact that high-mass-ratio BNS mergers are expected to have $Y_{e}\lesssim 0.1$ Bernuzzi et al. (2020); Nedora et al. (2021). The recovered electron fraction corresponds to $Y_{e}=0.20^{+0.03}_{-0.05}$. Regarding the tidal parameter, the $Y_{e}$ information affects the relative importance of the modes, improving the agreement with the GW estimates Abbott et al. (2017a, 2019a); Gamba et al. (2020a), and it reduces the support of the posterior distribution, leading to an estimate of ${\tilde{\Lambda}}=480^{+550}_{-220}$.
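Eq. (20) follows from Eq. (19) once explicit angular profiles are assumed. One choice that reproduces the $(1/2\pm 1/\pi)$ coefficients exactly — our reading, since the profiles are not restated here — is a mass distribution $\varrho(\theta)\propto\sin\theta$ with a step opacity taking one value in the equatorial band $\pi/4<\theta<3\pi/4$ and the other in the polar caps; matching Eq. (20) then identifies the equatorial value with $\kappa_{\rm low}$. The sketch below checks this numerically:

```python
import math

# Numerical check of Eq. (19) -> Eq. (20) under an assumed profile:
# rho(theta) ∝ sin(theta), with opacity k_eq in the equatorial band
# pi/4 < theta < 3*pi/4 and k_pol in the polar caps.  With this choice
# the equatorial value receives weight 1/2 + 1/pi, as in Eq. (20).
def average_opacity(k_eq, k_pol, n=100000):
    num = den = 0.0
    for i in range(n):                      # midpoint rule for Eq. (19)
        theta = (i + 0.5) * math.pi / n
        w = math.sin(theta) ** 2            # rho(theta) * sin(theta)
        kappa = k_eq if math.pi / 4 < theta < 3 * math.pi / 4 else k_pol
        num += w * kappa
        den += w
    return num / den

avg = average_opacity(1.0, 10.0)
closed_form = (0.5 + 1 / math.pi) * 1.0 + (0.5 - 1 / math.pi) * 10.0
```

Note also that the two coefficients sum to one, so Eq. (20) correctly returns $\bar{\kappa}=\kappa$ when $\kappa_{\rm low}=\kappa_{\rm high}=\kappa$.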
Combining the kN and GW posterior distributions, we estimate an upper bound on the mass ratio of $1.20$ and a tidal parameter ${\tilde{\Lambda}}=465^{+175}_{-130}$, corresponding to ${R_{1.4}}=12.14^{+0.75}_{-0.73}~{}{\rm km}$, at the 90% confidence level. #### 5.3.2 Disk mass Figure 11: Posterior distribution in the $({\tilde{\Lambda}},q)$ plane, analogous to Fig. 8, including the contributions of the electron fraction $Y_{e}$ and the disk mass $M_{\rm disk}$. The employed kN model also contains information on the baryonic wind ejecta. These components are expected to be generated by the disk that surrounds the remnant Kasen et al. (2015); Metzger & Fernández (2014); Just et al. (2015), if present. The disk mass can be estimated from NR simulations as a function of the BNS parameters $(q,{\tilde{\Lambda}})$, albeit with large uncertainties Radice et al. (2018b); Radice et al. (2018d); Nedora et al. (2020). We map a fraction $\xi$ of the disk mass $M_{\rm disk}$ into the mass of the baryonic wind components, ${m}^{\rm(V)}+{m}^{\rm(N)}=\xi\cdot M_{\rm disk}\,.$ (22) The mass fraction $\xi$ is sampled alongside the other parameters with a uniform prior in the range $[0.1,0.5]$. We include the disk mass information together with the electron fraction contribution discussed previously. The results are shown in Fig. 11. The disk mass contribution slightly reinforces the constraint on the mass ratio posterior, giving $q\lesssim 1.18$ at the 90% confidence level. The distribution of the tidal parameter ${\tilde{\Lambda}}$ is broader than in the case discussed in Sec. 5.3.1, due to the correlations induced by the $M_{\rm disk}$ formula. The recovered electron fraction is $Y_{e}={0.20}^{+0.04}_{-0.08}$, while the mass fraction corresponds to $\xi={0.14}^{+0.27}_{-0.04}$. The joint inference with the GW posterior leads to a mass ratio ${\lesssim}1.13$ and a tidal parameter of ${\tilde{\Lambda}}=430^{+180}_{-140}$, at the 90% confidence level.
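The kN+GW combination of Eq. (18), used for each of the joint inferences in this section, can be sketched with mock one-dimensional posteriors (Gaussian stand-ins for the actual $(q,{\tilde{\Lambda}})$ samples): the samples of one analysis are weighted by a kernel density estimate of the other and drawn with rejection sampling.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)

# Mock 1D posteriors on Lambda-tilde, standing in for the kN and GW results.
kn_samples = rng.normal(700.0, 300.0, 5000)
gw_samples = rng.normal(500.0, 200.0, 5000)

# Eq. (18): p_joint ∝ p_kn * p_gw.  Weight each kN sample by the GW
# density there, then accept/reject against the maximum weight.
gw_kde = gaussian_kde(gw_samples)
w = gw_kde(kn_samples)
joint = kn_samples[rng.random(kn_samples.size) < w / w.max()]
```

As expected for a product of two unimodal posteriors, the combined samples are narrower than either input, which is the effect seen in the joint constraints quoted above.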
This result translates into a radius of ${R_{1.4}}=11.99^{+0.82}_{-0.85}~{}{\rm km}$. ## 6 Conclusion In this paper, we have performed model selection on kN observations within a Bayesian framework applied to the case of AT2017gfo, the kN associated with the BNS merger GW170817. We have then combined the posteriors obtained from the kN observation with the ones extracted from the GW signal and with NR-based fitting formulae on the ejecta and remnant properties, to set tight constraints on the NS radius and EOS. From the analysis of AT2017gfo, the anisotropic description of the ejecta components is strongly preferred with respect to isotropic profiles, with a logarithmic Bayes factor of the order of ${\sim}10^{4}$. Moreover, the favored model is the three-component kN constituted by a fast dynamical ejecta (comprising both a red-equatorial and a blue-polar portion), a slow isotropic shell and a polar wind. For the best model, the inferred dynamical ejecta mass overestimates by a factor of two the theoretical expectation coming from NR simulations Perego et al. (2019); Nedora et al. (2019); Endrizzi et al. (2020); Nedora et al. (2021); Bernuzzi et al. (2020). This bias can be explained by considering the effect of the spiral-wave wind Nedora et al. (2019) and by taking into account the correlations between the extrinsic parameters. The recovered velocity of the dynamical component agrees with NR simulations Perego et al. (2019); Nedora et al. (2019); Endrizzi et al. (2020); Nedora et al. (2021); Bernuzzi et al. (2020), reinforcing the interpretation of this ejecta component. The intrinsic properties of the dynamical ejecta component are in agreement with previous results Villar et al. (2017b); Coughlin et al. (2019). Regarding the secular winds, the neutrino-driven mass and velocity are compatible with the calculations of Ref. Perego et al. (2014); Perego et al. (2017a).
The viscous component is the slowest contribution and is broadly compatible with the estimates of Ref. Radice et al. (2018c), which are inferred from NR and other disc simulations. The viewing angle resulting from the preferred kN model is larger than the one deduced from independent analyses Abbott et al. (2017c); Savchenko et al. (2017); Ghirlanda et al. (2019), and also different from the one obtained by a previous application of the same kN model Perego et al. (2017a). In the latter case, and differently from the present analysis, the profile of the viscous ejecta was assumed to be mostly distributed around the equatorial plane. This discrepancy confirms the non-trivial dependence of the light curves on the ejecta geometry and distributions. From a modeling perspective, the current kN description contains large theoretical uncertainties, such as thermalization effects, heating rates and energy-dependent photon opacities, e.g. Zhu et al. (2020). These effects propagate into systematic biases in the global parameters of the model, as shown in the posterior distributions for the luminosity distance $D_{L}$ and the heating rate parameter $\epsilon_{0}$. Hence, the development and improvement of kN templates is an urgent task in order to conduct reliable and robust analyses in the future. Figure 12: Summary plot of the current estimates of ${R_{1.4}}$. The reported values are the means and the 90% credible regions extracted from Refs. Annala et al. (2018); Radice et al. (2018b); De et al. (2018); Radice & Dai (2019); Coughlin et al. (2019); Abbott et al. (2018b); Raaijmakers et al. (2020); Capano et al. (2020); Jiang et al. (2020); Essick et al. (2020); Dietrich et al. (2020). The dashed line and the shaded area are, respectively, the average over all the current estimates and the corresponding 90% credible region, ${R_{1.4}}={12.0}^{+1.2}_{-1.2}$ km.
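The chain from binary parameters to the radius constraint can be sketched as follows. The function names are ours, and the coefficient of the quasi-universal relation ($a\approx 11.2$, with $R_{1.4}$ in km) is an approximate form of the De et al. (2018) relation, not a value taken from this paper; the paper absorbs its uncertainty into a Gaussian calibration coefficient.

```python
def reduced_tidal_parameter(q, lam1, lam2):
    """Eq. (16): reduced tidal parameter, with q = m1/m2 >= 1."""
    num = (q + 12.0) * q ** 4 * lam1 + (1.0 + 12.0 * q) * lam2
    return (16.0 / 13.0) * num / (1.0 + q) ** 5

def chirp_mass(m1, m2):
    """Source-frame chirp mass, (m1*m2)^(3/5) / (m1+m2)^(1/5)."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

def r14_from_universal_relation(mchirp, lam_tilde, a=11.2):
    """Approximate quasi-universal relation of De et al. (2018) /
    Radice & Dai (2019): R_1.4 ≈ a * Mchirp * (Lambda-tilde/800)^(1/6) km.
    The coefficient a is an assumption here, with its own uncertainty."""
    return a * mchirp * (lam_tilde / 800.0) ** (1.0 / 6.0)

# GW170817-like component masses give Mchirp ≈ 1.19 Msun; the joint
# estimate Lambda-tilde ~ 460 then maps to R_1.4 of roughly 12 km.
r14 = r14_from_universal_relation(chirp_mass(1.46, 1.27), 460.0)
```

A useful sanity check on Eq. (16): for an equal-mass binary with $\Lambda_1=\Lambda_2=\Lambda$, the reduced tidal parameter reduces to ${\tilde{\Lambda}}=\Lambda$.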
We use the preferred kN model to constrain the properties of the progenitor BNS and the EOS of dense, cold matter. Combining the kN measurement with the information coming from NR simulations, the ejecta properties are mapped into the mass ratio and reduced tidal deformability of the binary progenitor. Subsequently, this information is combined with the measurements from the GW data. The joint kN+GW analysis constrains the reduced tidal parameter to ${\tilde{\Lambda}}=460^{+210}_{-190}$ and the mass ratio of the BNS system to be lower than $1.27$, at the 90% credible level. Furthermore, the joint analysis predicts a radius for a NS of $1.4~{}{\rm M_{\odot}}$ of approximately ${R_{1.4}}\approx 12.2~{}{\rm km}$, with an uncertainty of ${\sim}500~{}{\rm m}$ at the one-$\sigma$ level. The ${R_{1.4}}$ estimate can be further improved by including additional physical information extracted from the kN model in the inference, such as the electron fraction of the dynamical ejecta and the mass of the disk around the merger remnant. Figure 12 summarizes our estimate together with the current estimates of ${R_{1.4}}$ extracted from the literature Annala et al. (2018); Radice et al. (2018b); De et al. (2018); Radice & Dai (2019); Coughlin et al. (2019); Abbott et al. (2018b); Raaijmakers et al. (2020); Capano et al. (2020); Jiang et al. (2020); Essick et al. (2020); Dietrich et al. (2020). In addition to the kN modeling uncertainties discussed above, another source of error in our estimates is the accuracy of the NR formulae. The relations employed here used exclusively targeted data and simulations with state-of-the-art microphysical EOS and neutrino treatment Perego et al. (2019); Nedora et al. (2019); Endrizzi et al. (2020); Nedora et al. (2021); Bernuzzi et al. (2020). However, the simulation sample is limited to a few hundred simulations, with fitting errors that could be reduced by considering data at even higher grid resolutions Nedora et al. (2020).
For example, assuming all the fit formulae to be exact (i.e. removing all calibration terms), it would be possible to infer the ${\tilde{\Lambda}}$ parameter from a kN observation with an accuracy of order 10, which corresponds to a constraint on the radius ${R_{1.4}}$ of roughly $100~{}{\rm m}$. ## Acknowledgements M.B. and S.B. acknowledge support by the European Union’s H2020 under ERC Starting Grant, grant agreement no. BinGraSp-714626. D.R. acknowledges support from the U.S. Department of Energy, Office of Science, Division of Nuclear Physics under Award Number(s) DE-SC0021177 and from the National Science Foundation under Grant No. PHY-2011725. The computational experiments were performed on the ARA cluster at Friedrich Schiller University Jena, supported in part by DFG grants INST 275/334-1 FUGG and INST 275/363-1 FUGG, and by ERC Starting Grant, grant agreement no. BinGraSp-714626. Data postprocessing was performed on the Virgo “Tullio” server at Torino, supported by INFN. This research has made use of data, software and/or web tools obtained from the Gravitational Wave Open Science Center (https://www.gw-openscience.org), a service of LIGO Laboratory, the LIGO Scientific Collaboration and the Virgo Collaboration. LIGO Laboratory and Advanced LIGO are funded by the United States National Science Foundation (NSF) as well as the Science and Technology Facilities Council (STFC) of the United Kingdom, the Max-Planck-Society (MPS), and the State of Niedersachsen/Germany for support of the construction of Advanced LIGO and construction and operation of the GEO600 detector. Additional support for Advanced LIGO was provided by the Australian Research Council.
Virgo is funded, through the European Gravitational Observatory (EGO), by the French Centre National de Recherche Scientifique (CNRS), the Italian Istituto Nazionale della Fisica Nucleare (INFN) and the Dutch Nikhef, with contributions by institutions from Belgium, Germany, Greece, Hungary, Ireland, Japan, Monaco, Poland, Portugal, Spain. ## Data availability The observational data underlying this article were provided by Villar et al. (2017b) under license. The posterior samples presented in this work will be shared on request to the corresponding author. ## References * Aasi et al. (2015) Aasi J., et al., 2015, Class. Quant. Grav., 32, 074001 * Abbott et al. (2017a) Abbott B. P., et al., 2017a, Phys. Rev. Lett., 119, 161101 * Abbott et al. (2017b) Abbott B. P., et al., 2017b, Astrophys. J., 848, L12 * Abbott et al. (2017c) Abbott B. P., et al., 2017c, Astrophys. J., 848, L13 * Abbott et al. (2017d) Abbott B. P., et al., 2017d, Astrophys. J., 850, L39 * Abbott et al. (2018a) Abbott B. P., et al., 2018a, Living Rev. Rel., 21, 3 * Abbott et al. (2018b) Abbott B. P., et al., 2018b, Phys. Rev. Lett., 121, 161101 * Abbott et al. (2019a) Abbott B. P., et al., 2019a, Phys. Rev., X9, 011001 * Abbott et al. (2019b) Abbott B. P., et al., 2019b, Phys. Rev., X9, 031040 * Acernese et al. (2015) Acernese F., et al., 2015, Class. Quant. Grav., 32, 024001 * Agathos et al. (2020) Agathos M., Zappa F., Bernuzzi S., Perego A., Breschi M., Radice D., 2020, Phys. Rev., D101, 044006 * Ajello et al. (2016) Ajello M., et al., 2016, Astrophys. J., 819, 44 * Annala et al. (2018) Annala E., Gorda T., Kurkela A., Vuorinen A., 2018, Phys. Rev. Lett., 120, 172703 * Arnett (1982) Arnett W. D., 1982, Astrophys. J., 253, 785 * Ashton et al. (2019) Ashton G., et al., 2019, Astrophys. J. Suppl., 241, 27 * Barbieri et al. (2019) Barbieri C., Salafia O. S., Perego A., Colpi M., Ghirlanda G., 2019, Astron. Astrophys., 625, A152 * Barbieri et al. (2020) Barbieri C., Salafia O. 
S., Perego A., Colpi M., Ghirlanda G., 2020, Eur. Phys. J., A56, 8 * Barnes et al. (2016) Barnes J., Kasen D., Wu M.-R., Martinez-Pinedo G., 2016, Astrophys. J., 829, 110 * Barnes et al. (2020) Barnes J., Zhu Y., Lund K., Sprouse T., Vassh N., McLaughlin G., Mumpower M., Surman R., 2020, Preprint (ArXiv:2010.11182) * Bauswein et al. (2013) Bauswein A., Goriely S., Janka H.-T., 2013, Astrophys.J., 773, 78 * Bauswein et al. (2014) Bauswein A., Stergioulas N., Janka H.-T., 2014, Phys.Rev., D90, 023002 * Bauswein et al. (2017) Bauswein A., Just O., Janka H.-T., Stergioulas N., 2017, Astrophys. J., 850, L34 * Bernuzzi et al. (2014) Bernuzzi S., Nagar A., Balmelli S., Dietrich T., Ujevic M., 2014, Phys.Rev.Lett., 112, 201101 * Bernuzzi et al. (2015a) Bernuzzi S., Nagar A., Dietrich T., Damour T., 2015a, Phys.Rev.Lett., 114, 161103 * Bernuzzi et al. (2015b) Bernuzzi S., Dietrich T., Nagar A., 2015b, Phys. Rev. Lett., 115, 091101 * Bernuzzi et al. (2020) Bernuzzi S., et al., 2020, Mon. Not. Roy. Astron. Soc. * Bovard et al. (2017) Bovard L., Martin D., Guercilena F., Arcones A., Rezzolla L., Korobkin O., 2017, Phys. Rev., D96, 124005 * Breschi et al. (2019) Breschi M., Bernuzzi S., Zappa F., Agathos M., Perego A., Radice D., Nagar A., 2019, Phys. Rev., D100, 104029 * Capano et al. (2020) Capano C. D., et al., 2020, Nature Astron., 4, 625 * Cardelli et al. (1989) Cardelli J. A., Clayton G. C., Mathis J. S., 1989, Astrophys. J., 345, 245 * Chornock et al. (2017) Chornock R., et al., 2017, Astrophys. J., 848, L19 * Coughlin et al. (2017) Coughlin M., Dietrich T., Kawaguchi K., Smartt S., Stubbs C., Ujevic M., 2017, Astrophys. J., 849, 12 * Coughlin et al. (2018) Coughlin M. W., et al., 2018, Mon. Not. Roy. Astron. Soc., 480, 3871 * Coughlin et al. (2019) Coughlin M. W., Dietrich T., Margalit B., Metzger B. D., 2019, Mon. Not. Roy. Astron. Soc., 489, L91 * Coulter et al. (2017) Coulter D. A., et al., 2017, Science * Cowperthwaite et al. (2017) Cowperthwaite P. 
S., et al., 2017, Astrophys. J., 848, L17 * Damour et al. (2012) Damour T., Nagar A., Villain L., 2012, Phys.Rev., D85, 123007 * De et al. (2018) De S., Finstad D., Lattimer J. M., Brown D. A., Berger E., Biwer C. M., 2018, Phys. Rev. Lett., 121, 091102 * Decoene et al. (2020) Decoene V., Guépin C., Fang K., Kotera K., Metzger B., 2020, JCAP, 04, 045 * Dietrich et al. (2018) Dietrich T., Bernuzzi S., Brügmann B., Tichy W., 2018, in 2018 26th Euromicro International Conference on Parallel, Distributed and Network-based Processing (PDP). pp 682--689 (arXiv:1803.07965), doi:10.1109/PDP2018.2018.00113, https://inspirehep.net/record/1663472/files/1803.07965.pdf * Dietrich et al. (2020) Dietrich T., Coughlin M. W., Pang P. T. H., Bulla M., Heinzel J., Issa L., Tews I., Antier S., 2020, Science, 370, 1450 * Endrizzi et al. (2020) Endrizzi A., et al., 2020, Eur. Phys. J. A, 56, 15 * Essick et al. (2020) Essick R., Tews I., Landry P., Reddy S., Holz D. E., 2020, Phys. Rev. C, 102, 055803 * Even et al. (2020) Even W., et al., 2020, Astrophys. J., 899, 24 * Fernández et al. (2015) Fernández R., Quataert E., Schwab J., Kasen D., Rosswog S., 2015, Mon. Not. Roy. Astron. Soc., 449, 390 * Fernández & Metzger (2013) Fernández R., Metzger B. D., 2013, Mon. Not. Roy. Astron. Soc., 435, 502 * Fujibayashi et al. (2018) Fujibayashi S., Kiuchi K., Nishimura N., Sekiguchi Y., Shibata M., 2018, Astrophys. J., 860, 64 * Fujibayashi et al. (2020) Fujibayashi S., Wanajo S., Kiuchi K., Kyutoku K., Sekiguchi Y., Shibata M., 2020, Astrophys. J., 901, 122 * Gamba et al. (2020a) Gamba R., Bernuzzi S., Nagar A., 2020a, preprint (ArXiv:2012.00027), * Gamba et al. (2020b) Gamba R., Breschi M., Bernuzzi S., Agathos M., Nagar A., 2020b, preprint (ArXiv:2009.08467), * Ghirlanda et al. (2019) Ghirlanda G., et al., 2019, Science, 363, 968 * Goodman & Weare (2010) Goodman J., Weare J., 2010, Communication in Applied Mathematics and Computational Science, 5, 65–80 * Grossman et al. 
(2014) Grossman D., Korobkin O., Rosswog S., Piran T., 2014, Mon. Not. Roy. Astron. Soc., 439, 757 * Hajela et al. (2019) Hajela A., et al., 2019, Astrophys. J. Lett., 886, L17 * Hotokezaka et al. (2013) Hotokezaka K., Kiuchi K., Kyutoku K., Okawa H., Sekiguchi Y.-i., et al., 2013, Phys.Rev., D87, 024001 * Hotokezaka et al. (2018) Hotokezaka K., Beniamini P., Piran T., 2018, Int. J. Mod. Phys., D27, 1842005 * Jiang et al. (2020) Jiang J.-L., Tang S.-P., Wang Y.-Z., Fan Y.-Z., Wei D.-M., 2020, Astrophys. J., 892, 1 * Just et al. (2015) Just O., Obergaulinger M., Janka H. T., 2015, Mon. Not. Roy. Astron. Soc., 453, 3386 * Kasen & Barnes (2019) Kasen D., Barnes J., 2019, Astrophys. J., 876, 128 * Kasen et al. (2013) Kasen D., Badnell N. R., Barnes J., 2013, Astrophys. J., 774, 25 * Kasen et al. (2015) Kasen D., Fernández R., Metzger B., 2015, Mon. Not. Roy. Astron. Soc., 450, 1777 * Kasen et al. (2017) Kasen D., Metzger B., Barnes J., Quataert E., Ramirez-Ruiz E., 2017, Nature * Kawaguchi et al. (2020) Kawaguchi K., Shibata M., Tanaka M., 2020, Astrophys. J., 889, 171 * Korobkin et al. (2012) Korobkin O., Rosswog S., Arcones A., Winteler C., 2012, Mon. Not. Roy. Astron. Soc., 426, 1940 * Lippuner & Roberts (2015) Lippuner J., Roberts L. F., 2015, Astrophys. J., 815, 82 * Margalit & Metzger (2017) Margalit B., Metzger B. D., 2017, Astrophys. J., 850, L19 * Martin et al. (2015) Martin D., Perego A., Arcones A., Thielemann F.-K., Korobkin O., Rosswog S., 2015, Astrophys. J., 813, 2 * Metzger (2020) Metzger B. D., 2020, Living Rev. Rel., 23, 1 * Metzger & Berger (2012) Metzger B., Berger E., 2012, Astrophys.J., 746, 48 * Metzger & Fernández (2014) Metzger B. D., Fernández R., 2014, Mon.Not.Roy.Astron.Soc., 441, 3444 * Metzger et al. (2008) Metzger B., Piro A., Quataert E., 2008, Mon.Not.Roy.Astron.Soc., 390, 781 * Metzger et al. (2010) Metzger B. D., Arcones A., Quataert E., Martinez-Pinedo G., 2010, Mon. Not. Roy. Astron. Soc., 402, 2771 * Miller et al. (2019) Miller J. 
M., et al., 2019, Phys. Rev., D100, 023008 * Nagar et al. (2018) Nagar A., et al., 2018, Phys. Rev., D98, 104052 * Nakar et al. (2018) Nakar E., Gottlieb O., Piran T., Kasliwal M. M., Hallinan G., 2018, Astrophys. J., 867, 18 * Nedora et al. (2019) Nedora V., Bernuzzi S., Radice D., Perego A., Endrizzi A., Ortiz N., 2019, Astrophys. J., 886, L30 * Nedora et al. (2020) Nedora V., et al., 2020, preprint (ArXiv:2011.11110) * Nedora et al. (2021) Nedora V., et al., 2021, Astrophys. J., 906, 98 * Nelson et al. (2013) Nelson B., Ford E. B., Payne M. J., 2013, The Astrophysical Journal, Supplement Series, 210, 11 * Nicholl et al. (2017) Nicholl M., et al., 2017, Astrophys. J., 848, L18 * Nynka et al. (2018) Nynka M., Ruan J. J., Haggard D., Evans P. A., 2018, Astrophys. J. Lett., 862, L19 * Oechslin et al. (2006) Oechslin R., Janka H.-T., Marek A., 2006, Astron.Astrophys. * Perego et al. (2014) Perego A., Rosswog S., Cabezon R., Korobkin O., Kaeppeli R., et al., 2014, Mon.Not.Roy.Astron.Soc., 443, 3134 * Perego et al. (2017a) Perego A., Radice D., Bernuzzi S., 2017a, Astrophys. J., 850, L37 * Perego et al. (2017b) Perego A., Yasin H., Arcones A., 2017b, J. Phys., G44, 084007 * Perego et al. (2019) Perego A., Bernuzzi S., Radice D., 2019, Eur. Phys. J., A55, 124 * Pian et al. (2017) Pian E., et al., 2017, Nature * Piran et al. (2013) Piran T., Nakar E., Rosswog S., 2013, Mon. Not. Roy. Astron. Soc., 430, 2121 * (89) Pozzo W. D., Veitch J., , https://github.com/johnveitch/cpnest, doi:10.5281/zenodo.835874 * Raaijmakers et al. (2020) Raaijmakers G., et al., 2020, Astrophys. J. Lett., 893, L21 * Radice & Dai (2019) Radice D., Dai L., 2019, Eur. Phys. J., A55, 50 * Radice et al. (2016) Radice D., Galeazzi F., Lippuner J., Roberts L. F., Ott C. D., Rezzolla L., 2016, Mon. Not. Roy. Astron. Soc., 460, 3255 * Radice et al. (2017) Radice D., Bernuzzi S., Del Pozzo W., Roberts L. F., Ott C. D., 2017, Astrophys. J., 842, L10 * Radice et al. 
(2018a) Radice D., Perego A., Bernuzzi S., Zhang B., 2018a, Mon. Not. Roy. Astron. Soc., 481, 3670 * Radice et al. (2018b) Radice D., Perego A., Zappa F., Bernuzzi S., 2018b, Astrophys. J., 852, L29 * Radice et al. (2018c) Radice D., Perego A., Hotokezaka K., Bernuzzi S., Fromm S. A., Roberts L. F., 2018c, Astrophys. J. Lett., 869, L35 * Radice et al. (2018d) Radice D., Perego A., Hotokezaka K., Fromm S. A., Bernuzzi S., Roberts L. F., 2018d, Astrophys. J., 869, 130 * Roberts et al. (2011) Roberts L. F., Kasen D., Lee W. H., Ramirez-Ruiz E., 2011, Astrophys.J., 736, L21 * Rosswog et al. (2013) Rosswog S., Piran T., Nakar E., 2013, Mon. Not. Roy. Astron. Soc., 430, 2585 * Rosswog et al. (2014) Rosswog S., Korobkin O., Arcones A., Thielemann F. K., Piran T., 2014, Mon. Not. Roy. Astron. Soc., 439, 744 * Rosswog et al. (2018) Rosswog S., Sollerman J., Feindt U., Goobar A., Korobkin O., Wollaeger R., Fremling C., Kasliwal M. M., 2018, Astron. Astrophys., 615, A132 * Savchenko et al. (2017) Savchenko V., et al., 2017, Astrophys. J., 848, L15 * Shibata et al. (2005) Shibata M., Taniguchi K., Uryu K., 2005, Phys. Rev., D71, 084021 * Siegel & Metzger (2018) Siegel D. M., Metzger B. D., 2018, Astrophys. J., 858, 52 * Sivia & Skilling (2006) Sivia D. S., Skilling J., 2006, Data Analysis - A Bayesian Tutorial, 2nd edn. Oxford Science Publications, Oxford University Press * Skilling (2006) Skilling J., 2006, Bayesian Anal., 1, 833 * Smartt et al. (2017) Smartt S. J., et al., 2017, Nature * Smith et al. (2020) Smith R. J. E., Ashton G., Vajpeyi A., Talbot C., 2020, Mon. Not. Roy. Astron. Soc., 498, 4492 * Tanaka et al. (2017) Tanaka M., et al., 2017, Publ. Astron. Soc. Jap. * Tanaka et al. (2020) Tanaka M., Kato D., Gaigalas G., Kawaguchi K., 2020, MNRAS, 496, 1369 * Tanvir et al. (2017) Tanvir N. R., et al., 2017, Astrophys. J., 848, L27 * Valenti et al. (2017) Valenti S., et al., 2017, Astrophys. J., 848, L24 * Veitch et al. (2015) Veitch J., et al., 2015, Phys. 
Rev., D91, 042003 * Villar et al. (2017a) Villar V. A., Berger E., Metzger B. D., Guillochon J., 2017a, Astrophys. J., 849, 70 * Villar et al. (2017b) Villar V. A., et al., 2017b, Astrophys. J., 851, L21 * Winkler et al. (2011) Winkler C., Diehl R., Ubertini P., Wilms J., 2011, Space Science Reviews, 161, 149–177 * Wollaeger et al. (2018) Wollaeger R. T., et al., 2018, Mon. Not. Roy. Astron. Soc., 478, 3298 * Wu et al. (2016) Wu M.-R., Fernández R., Martínez-Pinedo G., Metzger B. D., 2016, Mon. Not. Roy. Astron. Soc., 463, 2323 * Zappa et al. (2019) Zappa F., Bernuzzi S., Pannarale F., Mapelli M., Giacobbo N., 2019, Phys. Rev. Lett., 123, 041102 * Zhu et al. (2020) Zhu Y., Lund K., Barnes J., Sprouse T., Vassh N., McLaughlin G., Mumpower M., Surman R., 2020, preprint (ArXiv:2010.03668) * de Jesús Mendoza-Temis et al. (2015) de Jesús Mendoza-Temis J., Wu M.-R., Martinez-Pinedo G., Langanke K., Bauswein A., Janka H.-T., 2015, Phys. Rev., C92, 055805
# Conformal upper bounds for the volume spectrum Zhichao Wang University of Toronto [email protected] ###### Abstract. In this paper, we prove upper bounds for the volume spectrum of a Riemannian manifold that depend only on the volume, the dimension and a conformal invariant. ## 1. Introduction Let $(M^{n},g)$ be a closed Riemannian manifold of dimension $n\geq 2$. In [Alm62], Almgren proved that the space of mod 2 relative cycles $\mathcal{Z}_{n-1}(M;\mathbb{Z}_{2})$ is weakly homotopy equivalent to $\mathbb{RP}^{\infty}$; see also [LMN16, §2.5]. By performing a min-max procedure, Gromov [Gro88] defined the volume spectrum of $M$, which is a non-decreasing sequence of positive numbers $0<\omega_{1}(M,g)\leq\omega_{2}(M,g)\leq\cdots\leq\omega_{k}(M,g)\to\infty,$ depending only on $M$ and $g$. Moreover, Gromov [Gro88] also showed that for each $g$, $\omega_{k}$ grows like $k^{\frac{1}{n}}$; see also Guth [Guth09]. For closed Riemannian surfaces (i.e. $n=2$), Y. Liokumovich [Lio16] bounded the entire volume spectrum in terms of the genus of the surface. In this paper, we generalize these results and prove conformal upper bounds for the entire volume spectrum of closed Riemannian manifolds. ###### Theorem 1.1. There exists a constant $C=C(n)$ such that for any $n$-dimensional closed Riemannian manifold $(M,g)$, we have $\omega_{k}(M,g)\leq C|M|_{g}^{\frac{n-1}{n}}\max\\{k^{\frac{1}{n}},\mathrm{MCV}(M,g)^{\frac{1}{n}}\\}.$ Here $|\Sigma|_{g^{\prime}}$ denotes the $\mathcal{H}^{m}$-measure with respect to $g^{\prime}$ of any $m$-dimensional submanifold $\Sigma$ of $M$, and $\mathrm{MCV}(M,g):=\inf\\{|M|_{g_{0}}:g_{0}\text{ is a metric conformal to $g$ and $\operatorname{Ric}_{g_{0}}(M)\geq-(n-1)$}\\},$ which is called the min-conformal volume of $M$; cf. [Has11, §1] and [GL17, Definition 1.2]. For simplicity, we use $[g]$ to denote the collection of Riemannian metrics that are conformal to $g$. ###### Remark 1.2. We make several remarks here:
(1) Note that by the uniformization theorem, $\mathrm{MCV}(M)\leq 2\gamma$ if $M$ is a closed surface of genus $\gamma$. Then Theorem 1.1 recovers exactly the result of [Lio16]. 2. (2) From the proof, the estimate in Theorem 1.1 also holds for compact domains $N\subset M$, i.e. for any $g_{0}\in[g]$ with $\operatorname{Ric}_{g_{0}}(M)\geq-(n-1)$ and $N\subset M$, $\omega_{k}(N,g)\leq C|N|_{g}^{\frac{n-1}{n}}\max\\{k^{\frac{1}{n}},|N|_{g_{0}}^{\frac{1}{n}}\\}.$ 3. (3) Our theorem is sharp in the following sense: in the general case (not within a fixed conformal class), L. Guth [Guth09, Section 5] gave a counterexample for the first width in the volume spectrum. In other words, a closed oriented Riemannian $n$-manifold may have volume 1 and arbitrarily large $\omega_{1}(M,g)$. We point out that Glynn-Adey-Liokumovich [GL17] proved conformal upper bounds for the first width in the volume spectrum (i.e. the case $k=1$), which will be used in this paper. For closed Riemannian manifolds with non-negative Ricci curvature, uniform upper bounds for the volume spectrum were proved by Glynn-Adey-Liokumovich [GL17] and Sabourau [Sab17]. Observe that $\mathrm{MCV}(M,g)=0$ provided that there exists $g_{0}\in[g]$ with $\operatorname{Ric}_{g_{0}}(M)\geq 0$. Hence we have the following corollary. ###### Corollary 1.3. Let $(M,g)$ be a closed Riemannian manifold such that there exists $g_{0}\in[g]$ with $\operatorname{Ric}_{g_{0}}(M)\geq 0$. Then there exists a constant $C=C(n)$ such that $\omega_{k}(M,g)\leq C|M|_{g}^{\frac{n-1}{n}}k^{\frac{1}{n}}.$ To understand the volume spectrum, Gromov [Gro03, Remark 8.4] proposed the insightful idea that many properties of the eigenvalues of the Laplacian have analogs for the volume spectrum. Furthermore, Gromov conjectured that the volume spectrum $\\{\omega_{k}(M,g)\\}_{k\in\mathbb{N}}$ satisfies a Weyl law, which has been fully proved by Liokumovich-Marques-Neves [LMN16].
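For later reference, the Weyl law proved in [LMN16] can be stated as follows: there is a constant $a(n)>0$, depending only on the dimension, such that $\lim_{k\rightarrow\infty}\omega_{k}(M,g)\,k^{-\frac{1}{n}}=a(n)\,|M|_{g}^{\frac{n-1}{n}}.$ In particular, the growth rate $k^{\frac{1}{n}}$ appearing in Theorem 1.1 is optimal in $k$.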
For Laplacian operators, Korevaar [Kor93] proved upper bounds for the Neumann eigenvalues of Riemannian manifolds that are conformal to a manifold with non-negative Ricci curvature. Later, Hassannezhad [Has11] obtained conformal upper bounds for the eigenvalues of the Laplacian in the conformal class of compact Riemannian manifolds. Our Theorem 1.1 and Corollary 1.3 are volume spectrum analogs of the results of Hassannezhad [Has11] and Korevaar [Kor93], respectively. We refer to [LY82, Bus82, CoMa] for estimates of the Laplacian and to [NR04, BS10, LZ18, LM20] for some developments of sweepouts by cycles. Due to the development of min-max theory by Almgren [Alm62, Alm65], Pitts [Pi], Schoen-Simon [SS] and Marques-Neves [MN16], bounds on the volume spectrum give information about finding minimal hypersurfaces in closed Riemannian manifolds; see [MN17, IMN17, Song18, MNS17, SZ20]. In particular, using the Multiplicity One Theorem proven by X. Zhou [Zhou20] (see also [CM20]), Marques-Neves [MN18] proved that in any closed Riemannian manifold $M$ of dimension $3\leq n\leq 7$, for generic metrics $g$, there exists a sequence of embedded minimal hypersurfaces $\\{\Sigma_{k}\\}$ such that $\omega_{k}(M,g)=\mathcal{H}^{n-1}(\Sigma_{k})\ \ \ \text{ and }\ \ \ \operatorname{index}(\Sigma_{k})=k.$ Then our Theorem 1.1 gives a conformal upper bound for the areas of these embedded minimal hypersurfaces. ### Idea of the proof Let $(M,g)$ be a closed Riemannian manifold and $g_{0}\in[g]$ such that $\operatorname{Ric}_{g_{0}}(M)\geq-(n-1)$. Denote by $B_{r}^{0}(p)$ the geodesic ball in $M$ of radius $r$ and center $p$ with respect to $g_{0}$. For simplicity, we use $|\cdot|$ to denote $|\cdot|_{g}$.
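Since the proof repeatedly passes between $g$- and $g_{0}$-quantities, we record the elementary conformal identity behind the length-area method (a standard computation, stated here for the reader's convenience). Writing $g_{0}=e^{2\phi}g$ for a smooth function $\phi$, an $m$-dimensional measure rescales as $d\mathcal{H}^{m}(g_{0})=e^{m\phi}d\mathcal{H}^{m}(g)$, while gradients rescale as $|\nabla^{0}f|_{g_{0}}^{2}=e^{-2\phi}|\nabla f|_{g}^{2}$, where $\nabla$ and $\nabla^{0}$ denote the gradients with respect to $g$ and $g_{0}$. Consequently, on an $n$-dimensional manifold the combination $|\nabla f|^{n}\,d\mathcal{H}^{n}(g)=|\nabla^{0}f|^{n}\,d\mathcal{H}^{n}(g_{0})$ is conformally invariant.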
We first recall the construction of $k$-sweepouts by Gromov [Gro88] and Guth [Guth09, Section 5], where they proved that if a closed manifold $M$ is divided into a collection of open domains $\\{V_{j}\\}$, then $\omega_{k}(M,g)\leq\Big{|}\bigcup_{j}\partial V_{j}\Big{|}+k\max_{j}\omega_{1}(V_{j},g).$ The challenge is then to divide $M$ into suitable domains for each $k$. Without loss of generality, we assume that $|M|=|M|_{g_{0}}$. By the work of Glynn-Adey-Liokumovich [GL17], it suffices to consider $k\geq|M|_{g_{0}}$ and $k>100^{n}$. We now fix $k$ and let $\alpha=|M|/k$ and $r=\alpha^{\frac{1}{n}}/C$. The aim is to bound $\omega_{k}(M,g)$ by $C|M|^{\frac{n-1}{n}}k^{\frac{1}{n}}$. In the first step, we subdivide $M$ into domains $\\{D_{j}\\}_{j=1}^{m+1}$ ($m\leq k-1$) such that $|D_{j}|_{g_{0}}<1$ for $1\leq j\leq m$, $\sum|\partial D_{j}|\sim k^{\frac{1}{n}}$ and $|B_{r}^{0}(p)\cap D_{m+1}|\leq\alpha$ for all $p\in M$. This can be done inductively by taking $B^{0}_{r}(p)$ such that its $g$-volume is larger than $\alpha$. The length-area method then also enables us to control $|\partial D_{j}|$; see Claim 2 for details. The next step is to subdivide $D_{m+1}$. To do this, we always take the ball $B_{r}^{0}(p)$ that has the largest area in the remaining part with respect to $g$. Then the length-area method allows us to find a domain $V_{i}$ between $B_{3r}^{0}(p)$ and $B_{4r}^{0}(p)$ such that its boundary has the desired bound. The difficulty here is that $|B_{4r}^{0}(p)|_{g_{0}}$ is used to bound $|\partial V_{i}|$, and these balls of radius $4r$ may intersect each other. To overcome this, we prove that for each point $x\in D_{m+1}$, the number of $V_{i}$ that contain $x$ is bounded by a uniform constant depending only on $n$. Then, using Hölder's inequality, we obtain the desired covers for $D_{m+1}$. Finally, we subdivide $D_{j}$ for $1\leq j\leq m$.
One of the key ingredients is the isoperimetric inequality developed by Glynn-Adey-Liokumovich [GL17, Theorem 3.4] (see also Theorem 4.1), which allows us to subdivide $D_{j}$ into two parts. Repeating this process, we eventually subdivide $D_{j}$ into $\\{U_{i}^{j}\\}_{i}$ until each small domain has $g$-volume bounded by $|M|/k$. Then, using the estimates for the first width in the volume spectrum in [GL17] (see also Theorem 2.2 for compact domains), $k\omega_{1}(U_{i}^{j},g)$ is naturally bounded by a multiple of $k^{\frac{1}{n}}$. It remains to bound the part of the boundary of $U_{i}^{j}$ that lies in $\mathrm{Int}D_{j}$, which consists exactly of the isoperimetric hypersurfaces from Theorem 4.1. In Subsection 2.4, we develop a general framework to study this kind of tree decomposition; see Proposition 2.4 for details. We would like to emphasize that $|D_{j}|_{g_{0}}<1$ is crucial to obtaining the desired bounds in this part. ### Outline This paper is organized as follows. Section 2 includes some results that will be used in this paper and an upper bound for the tree decomposition. In Sections 3 and 4, we provide the details to subdivide the conformally thin and thick domains, respectively. Finally, Section 5 is devoted to proving the main theorem. We also give more details of the proof of Theorem 2.2 in Appendix A. ### Acknowledgments We are grateful to Professor Yevgeny Liokumovich for bringing this problem to our attention and for many valuable discussions. ## 2\. Preliminary ### 2.1. Notations In this paper, $(M^{n},g)$ is always a closed Riemannian manifold of dimension $n$ and $N$ is a compact domain in $M$ with piecewise smooth boundary. We now recall the formulation in [LMN16]. Let $(N,\partial N,g)\subset\mathbb{R}^{L}$ be a compact Riemannian manifold with piecewise smooth boundary. Let $\mathcal{R}_{k}(N;\mathbb{Z}_{2})$ (resp.
$\mathcal{R}_{k}(\partial N;\mathbb{Z}_{2})$) be the space of $k$-dimensional rectifiable currents in $\mathbb{R}^{L}$ with coefficients in $\mathbb{Z}_{2}$ which are supported in $N$ (resp. $\partial N$). Denote by $\mathbf{M}$ the mass norm. Let (2.1) $Z_{k}(N,\partial N;\mathbb{Z}_{2}):=\\{T\in\mathcal{R}_{k}(N;\mathbb{Z}_{2}):\operatorname{spt}(\partial T)\subset\partial N\\}.$ We say that two elements $S_{1},S_{2}\in Z_{k}(N,\partial N;\mathbb{Z}_{2})$ are equivalent if $S_{1}-S_{2}\in\mathcal{R}_{k}(\partial N;\mathbb{Z}_{2})$. Denote by $\mathcal{Z}_{k}(N,\partial N;\mathbb{Z}_{2})$ the space of all such equivalence classes. The mass and flat norms for any $\tau\in\mathcal{Z}_{k}(N,\partial N;\mathbb{Z}_{2})$ are defined by $\mathbf{M}(\tau):=\inf\\{\mathbf{M}(S):S\in\tau\\}\quad\text{ and }\quad\mathcal{F}(\tau):=\inf\\{\mathcal{F}(S):S\in\tau\\}.$ The support of $\tau\in\mathcal{Z}_{k}(N,\partial N;\mathbb{Z}_{2})$ is defined by $\operatorname{spt}(\tau):=\bigcap_{S\in\tau}\operatorname{spt}(S).$ Let $X$ be a finite dimensional simplicial complex. Given $k\in\mathbb{N}$, a continuous map in the flat topology $\Phi:X\rightarrow\mathcal{Z}_{n-1}(N,\partial N;\mathbb{Z}_{2})$ is called a $k$-sweepout if the $k$-th cup power of $\lambda=\Phi^{*}(\bar{\lambda})$ is non-zero in $H^{k}(X;\mathbb{Z}_{2})$, where $0\neq\bar{\lambda}\in H^{1}(\mathcal{Z}_{n-1}(N,\partial N;\mathbb{Z}_{2});\mathbb{Z}_{2})\cong\mathbb{Z}_{2}$. Denote by $\mathcal{P}_{k}(N)$ the set of all $k$-sweepouts that are continuous in the flat topology and have no concentration of mass ([MN17, §3.7]), i.e. $\lim_{r\rightarrow 0}\sup\\{\mathbf{M}(\Phi(x)\llcorner B_{r}(q)):x\in X,q\in M\\}=0.$ In [MN17] and [LMN16], the $k$-width of codimension one is defined as (2.2) $\omega_{k}(N,g):=\inf_{\Phi\in\mathcal{P}_{k}(N)}\sup\\{\mathbf{M}(\Phi(x)):x\in\mathrm{dmn}(\Phi)\\}.$ The numbers $\\{\omega_{k}(N,g)\\}$ are also called the volume spectrum. ###### Remark 2.1.
In this paper, we use integer rectifiable currents, as in [LZ16]. However, this formulation is equivalent to the one in [LMN16]; see [GLWZ19, Proposition 3.2] for details. ### 2.2. Conformal bounds for the first width In [GL17], Glynn-Adey and Liokumovich proved a uniform bound on the first width for all closed manifolds. With minor modifications, their arguments apply to compact domains. Such a uniform bound will be used later in this paper. Let $g$ and $g_{0}$ be two Riemannian metrics on $M$. For any $m$-dimensional submanifold $\Sigma$ of $M$, we use $|\Sigma|$ and $|\Sigma|_{g_{0}}$ to denote the $\mathcal{H}^{m}$-measure with respect to $g$ and $g_{0}$, respectively. ###### Theorem 2.2 (Glynn-Adey-Liokumovich [GL17]). Let $N$ be a compact domain of a closed Riemannian manifold $(M,g)$ of dimension $n$. Let $g_{0}$ be another metric on $M$ which is conformal to $g$ with $\operatorname{Ric}_{g_{0}}(M)\geq-(n-1)$. There exists a constant $K$ depending only on the dimension of $N$ such that $\omega_{1}(N,g)\leq K\cdot|N|^{\frac{n-1}{n}}(1+|N|^{\frac{1}{n}}_{g_{0}}).$ For completeness, we sketch the idea of the proof here and give more details in Appendix A. We first handle the case that $N$ has smooth boundary. Following the steps in [GL17], we decompose a domain with small volume into small pieces so that the argument in [GL17, Proposition 2.3] can be applied, and then we use the inductive method in [GL17, Theorem 5.1]. To decompose $D\subset N$ with small volume, we cut off the part intersecting $\partial N$. Then the regularity theory for the free boundary minimizing problem [Mor03, Theorem 4.7] (see also [GLWZ19, Theorem 4.7]) is used. In order to show that such a minimizing hypersurface does not intersect a smaller ball, we employ the monotonicity formula in [GLZ16, Theorem 3.4].
Finally, for a compact domain with piecewise smooth boundary, we can take a tubular neighborhood $U$ with smooth boundary such that $|U|\leq 2|N|$ and $|U|_{g_{0}}\leq 2|N|_{g_{0}}$. Then the desired inequality follows from $\omega_{1}(N,g)\leq\omega_{1}(U,g)$. ### 2.3. The length-area method Let $(M,g)$ be a closed Riemannian manifold and $N$ be a compact domain with piecewise smooth boundary. Let $g_{0}$ be a metric on $M$ which is conformal to $g$ and $\operatorname{Ric}_{g_{0}}(M)\geq-(n-1)$. Denote by $\nabla$ and $\nabla^{0}$ the Levi-Civita connections with respect to $g$ and $g_{0}$, respectively. For any compact set $A\subset M$, denote by $\mathcal{N}_{r}^{0}(A):=\\{x\in M:\operatorname{dist}_{g_{0}}(x,A)\leq r\\},$ where $\operatorname{dist}_{g_{0}}(\cdot,\cdot)$ is the distance with respect to $g_{0}$. Recall that $|\Sigma|$ and $|\Sigma|_{g_{0}}$ are used to denote the $\mathcal{H}^{m}$-measure with respect to $g$ and $g_{0}$ if $\Sigma$ is an $m$-dimensional submanifold of $M$. The following inequality comes from the well-known length-area method (see [Gro83, §5], [GL17, Theorem 3.4], [Lio16, Lemma 4.1]) and will be used in this paper. ###### Proposition 2.3. For any compact domain $D\subset N$ and $r>0$, there exists a compact domain $V$ of $N$ satisfying $D\subset V\subset\mathcal{N}_{r}^{0}(D)$ and $|\partial V\cap\mathrm{Int}N|\leq(1/r)\cdot|N\cap\mathcal{N}^{0}_{r}(D)\setminus D|_{g_{0}}^{\frac{1}{n}}\cdot|N\cap\mathcal{N}^{0}_{r}(D)\setminus D|^{\frac{n-1}{n}}.$ ###### Proof. We present the proof from [GL17, Theorem 3.4] here.
For $x\in M$, denote by $f(x)=\operatorname{dist}_{g_{0}}(x,D).$ By the co-area formula, $\displaystyle\int_{0}^{r}|f^{-1}(t)\cap\mathrm{Int}N|dt$ $\displaystyle=\int_{f^{-1}(0,r)\cap N}|\nabla f|d\mathcal{H}^{n}(g)$ $\displaystyle\leq\Big{(}\int_{f^{-1}(0,r)\cap N}|\nabla f|^{n}d\mathcal{H}^{n}(g)\Big{)}^{\frac{1}{n}}\cdot|f^{-1}(0,r)\cap N|^{\frac{n-1}{n}}$ $\displaystyle=|f^{-1}(0,r)\cap N|^{\frac{1}{n}}_{g_{0}}\cdot|f^{-1}(0,r)\cap N|^{\frac{n-1}{n}}.$ Here the last equality follows from the conformal invariance $|\nabla f|^{n}d\mathcal{H}^{n}(g)=|\nabla^{0}f|^{n}d\mathcal{H}^{n}(g_{0})$ together with the fact that $|\nabla^{0}f|=1$ almost everywhere, since $f$ is a $g_{0}$-distance function. Note that $f^{-1}(0,r)=\mathcal{N}^{0}_{r}(D)\setminus D.$ Hence there exists $t\in(0,r)$ such that $|f^{-1}(t)\cap\mathrm{Int}N|$ is bounded by $1/r$ times the right-hand side above; taking $V=\\{x\in N:f(x)\leq t\\}$ proves Proposition 2.3. ∎ ### 2.4. Tree decomposition Let $\alpha=\overline{\alpha_{1}\alpha_{2}\cdots\alpha_{m}}$ be an ordered binary array with $\alpha_{j}\in\\{0,1\\}$. Then we define $|\alpha|=m$. For two binary arrays $\alpha$ and $\beta$, we say $\alpha\preceq\beta$ if $\alpha_{j}=\beta_{j}$ for all $j\leq|\alpha|$. We say $\Lambda$ is an admissible tree provided the following holds: * • if $\alpha\in\Lambda$, then $\beta\in\Lambda$ for any $\beta\preceq\alpha$; * • $\overline{\alpha 0}\in\Lambda$ if and only if $\overline{\alpha 1}\in\Lambda$. Denote by $\partial\Lambda=\\{\alpha\in\Lambda:\text{ if }\beta\in\Lambda\text{ with }\alpha\preceq\beta,\text{ then }\beta=\alpha\\}$. Let $\Lambda$ be an admissible tree and $\lambda\in(0,1/2]$. For any real number $X\geq 1$, we say a sequence of real numbers $\\{X_{\alpha}\\}$ is a $(\Lambda,\lambda)$-decomposition if * • $X=X_{0}+X_{1}$ and $X_{i}>\lambda X$ for $i\in\\{0,1\\}$; * • $X_{\alpha}=X_{\overline{\alpha 0}}+X_{\overline{\alpha 1}}$ and $X_{\overline{\alpha i}}\geq\lambda X_{\alpha}$ for all $\alpha\in\Lambda\setminus\partial\Lambda$ and $i\in\\{0,1\\}$; * • $X_{\alpha}\geq 1$ for all $\alpha\in\Lambda$. In the rest of this subsection, $\lambda\in(0,1/2)$ is a constant.
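The combinatorics of admissible trees and $(\Lambda,\lambda)$-decompositions can be made concrete in a few lines of code. The following sketch is an illustration only (the names `is_admissible` and `check_decomposition` are ours, not from the paper): it represents a binary array $\alpha$ as a string over $\\{0,1\\}$, verifies the defining properties above, and returns the boundary $\partial\Lambda$.

```python
from fractions import Fraction

def is_admissible(tree):
    """An admissible tree: closed under taking prefixes (ancestors),
    and alpha0 belongs to the tree iff alpha1 does."""
    for a in tree:
        for j in range(1, len(a)):
            if a[:j] not in tree:          # every proper prefix is present
                return False
        sibling = a[:-1] + ("1" if a[-1] == "0" else "0")
        if sibling not in tree:            # siblings come in pairs
            return False
    return True

def check_decomposition(X, vals, lam):
    """Verify that vals = {alpha: X_alpha} is a (Lambda, lam)-decomposition
    of X, and return the boundary (the leaves) of the tree."""
    tree = set(vals)
    assert is_admissible(tree)
    leaves = {a for a in tree if a + "0" not in tree}
    assert vals["0"] + vals["1"] == X      # the root splits exactly
    for a in tree:
        assert vals[a] >= 1                # X_alpha >= 1
        parent = vals[a[:-1]] if len(a) > 1 else X
        assert vals[a] >= lam * parent     # lower mass bound
        if a not in leaves:                # internal nodes split exactly
            assert vals[a + "0"] + vals[a + "1"] == vals[a]
    assert sum(vals[a] for a in leaves) == X   # leaves carry the full mass
    return leaves

# A decomposition of X = 10 with lambda = 1/4 (exact rational arithmetic):
leaves = check_decomposition(10, {"0": 4, "1": 6, "10": 3, "11": 3},
                             Fraction(1, 4))
```

Note that the set `leaves` computed here is exactly $\partial\Lambda$ as defined above, and the final assertion reflects that the exact splits telescope, so the boundary values always sum back to $X$.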
Let $\widetilde{\lambda}=\big{[}\lambda^{\frac{n-1}{n}}+(1-\lambda)^{\frac{n-1}{n}}-1\big{]}^{-1}.$ Then for any $t\in[\lambda,1-\lambda]$, we have (2.3) $\widetilde{\lambda}\cdot\big{(}t^{\frac{n-1}{n}}+(1-t)^{\frac{n-1}{n}}-1\big{)}\geq 1.$ For any $X\geq 1$, we define $\mathcal{N}(X):=\sup\Big{\\{}X^{\frac{n-1}{n}}+\sum_{\alpha\in\Lambda}X_{\alpha}^{\frac{n-1}{n}}:\\{X_{\alpha}\\}\text{ is a $(\Lambda,\lambda)$-decomposition for some admissible tree }\Lambda\Big{\\}}.$ The main result in this subsection is that $\mathcal{N}(X)$ has linear growth. ###### Proposition 2.4. For any $X\geq 1$, we have $\mathcal{N}(X)+\widetilde{\lambda}X^{\frac{n-1}{n}}\leq(1+\widetilde{\lambda})X.$ ###### Proof. For any $(\Lambda,\lambda)$-decomposition $\\{X_{\alpha}\\}$, we have that $X^{\frac{n-1}{n}}+\sum_{\alpha\in\Lambda}X_{\alpha}^{\frac{n-1}{n}}\leq X^{\frac{n-1}{n}}+\mathcal{N}(X_{0})+\mathcal{N}(X_{1})\leq X^{\frac{n-1}{n}}+\sup_{\lambda\leq t\leq 1-\lambda}(\mathcal{N}(tX)+\mathcal{N}((1-t)X)).$ This implies that $\mathcal{N}(X)\leq X^{\frac{n-1}{n}}+\sup_{\lambda\leq t\leq 1-\lambda}(\mathcal{N}(tX)+\mathcal{N}((1-t)X)).$ Denote by $\widetilde{\mathcal{N}}(X)=\mathcal{N}(X)+\widetilde{\lambda}X^{\frac{n-1}{n}}$. Then (2.4) $\widetilde{\mathcal{N}}(X)\leq\sup_{\lambda\leq t\leq 1-\lambda}(\widetilde{\mathcal{N}}(tX)+\widetilde{\mathcal{N}}((1-t)X)),$ where we used the fact (2.3). For any $X\in[1,2)$, we have $\widetilde{\mathcal{N}}(X)=X^{\frac{n-1}{n}}+\widetilde{\lambda}X^{\frac{n-1}{n}}\leq(1+\widetilde{\lambda})X$. Now we prove the inequality inductively. Suppose that it holds true for $X<Y$ ($Y\geq 2$). 
Then for any $X\in[Y,Y+\lambda]$ and $t\in[\lambda,1-\lambda]$, we have $tX\leq(1-\lambda)(Y+\lambda)\leq Y-\lambda.$ Hence $\widetilde{\mathcal{N}}(tX)\leq(1+\widetilde{\lambda})tX\ \ \ \text{ and }\ \ \ \widetilde{\mathcal{N}}((1-t)X)\leq(1+\widetilde{\lambda})(1-t)X.$ Together with (2.4), we conclude that $\widetilde{\mathcal{N}}(X)\leq\sup_{\lambda\leq t\leq 1-\lambda}\big{[}(1+\widetilde{\lambda})tX+(1+\widetilde{\lambda})(1-t)X\big{]}=(1+\widetilde{\lambda})X.$ This finishes the proof of Proposition 2.4. ∎ ## 3\. Dividing conformally thin domains Let $(M,g)$ be a closed Riemannian manifold and $g_{0}\in[g]$ with $\operatorname{Ric}_{g_{0}}(M)\geq-(n-1)$. Denote by $B^{0}_{r}(p)$ the geodesic ball in $(M,g_{0})$ with center $p$ and radius $r$. In this section, we divide compact domains $N$ for which the geodesic balls in $(M,g_{0})$ of radius $r$ satisfy $|B_{r}^{0}(p)\cap N|\leq\alpha,\ \ \ \forall p\in M,$ where $r$ and $\alpha$ are given constants. Such domains are said to be conformally thin. Denote by $v(r,n)$ the volume of the geodesic ball of radius $r$ in the $n$-dimensional hyperbolic space (with sectional curvature $-1$). Denote by $C(r)=\max_{0<t\leq r}\Big{\\{}1+\Big{[}\frac{v(9t/2,n)}{v(t/2,n)}\Big{]}\Big{\\}}.$ Then in any complete Riemannian manifold with $\operatorname{Ric}\geq-(n-1)$, every geodesic ball of radius $4s$ can be covered by $C(r)$ many balls of radius $s$ for all $s\in(0,r]$. Note that $C(r)$ is a constant depending only on $r$ and $n$; cf. [CoMa, Example 2.1]. Let $C_{0}=C_{0}(n)$ be the constant such that for $r<10$, $v(r,n)\leq C_{0}r^{n}.$ By the classical Bishop-Gromov inequality, a geodesic ball of radius $r<10$ in a Riemannian manifold with $\operatorname{Ric}\geq-(n-1)$ has volume at most $C_{0}r^{n}$. Let $K$ be the constant in Theorem 2.2. ###### Lemma 3.1. Let $N$ be a compact domain with (possibly empty) piecewise smooth boundary in some closed Riemannian manifold $(M,g)$.
Suppose that there exist $\alpha>0$ and $r\in(0,1)$ satisfying $|B^{0}_{r}(p)\cap N|\leq\alpha$ for all $p\in M$. Then $N$ can be divided into finitely many open domains $\\{V_{j}\\}_{j=1}^{L}$ by $\cup\partial V_{j}$ satisfying (3.1) $\displaystyle\Big{|}\bigcup_{j=1}^{L}\partial V_{j}\cap\mathrm{Int}N\Big{|}\leq(C_{1}/r)\cdot|N|^{\frac{1}{n}}_{g_{0}}\cdot|N|^{\frac{n-1}{n}};$ (3.2) $\displaystyle\omega_{1}(V_{j},g)\leq C_{1}\alpha^{\frac{n-1}{n}}\ \ \text{ for }\ \ 1\leq j\leq L,$ where $C_{1}=C(r/2)C(2r)+(4C_{0}+1)K\cdot C(r)$. ###### Proof. Since $\operatorname{Ric}_{g_{0}}(M)\geq-(n-1)$ and $r<1$, we have (3.3) $|B_{4r}^{0}(p)|_{g_{0}}\leq C_{0}(4r)^{n}.$ Now we construct $\\{V_{j}\\}$ inductively. Let $V_{0}=\emptyset$. Suppose we have $V_{0},\cdots,V_{j}$ and $N\setminus\cup_{i=1}^{j}\overline{V}_{i}\neq\emptyset$. Then we take $p_{j+1}\in M\setminus\cup_{i=0}^{j}\overline{V}_{i}$ such that for all $p\in M$, $\Big{|}N\cap B_{r}^{0}(p_{j+1})\setminus\bigcup_{i=0}^{j}V_{i}\Big{|}\geq\Big{|}N\cap B_{r}^{0}(p)\setminus\bigcup_{i=0}^{j}V_{i}\Big{|}.$ Note that $B_{4r}^{0}(p_{j+1})$ is covered by $C(r)$ many balls of radius $r$. It follows that $\Big{|}N\cap B_{4r}^{0}(p_{j+1})\setminus\bigcup_{i=0}^{j}V_{i}\Big{|}\leq C(r)\Big{|}N\cap B_{r}^{0}(p_{j+1})\setminus\bigcup_{i=0}^{j}V_{i}\Big{|}\leq C(r)\alpha.$ Then by Proposition 2.3, we take $V_{j+1}$ satisfying (3.4) $B_{3r}^{0}(p_{j+1})\cap N\setminus\bigcup_{i=0}^{j}V_{i}\subset V_{j+1}\subset B_{4r}^{0}(p_{j+1})\cap N\setminus\bigcup_{i=0}^{j}V_{i}$ and (3.5) $\Big{|}\partial V_{j+1}\cap\mathrm{Int}(N\setminus\bigcup_{i=0}^{j}V_{i})\Big{|}\leq\frac{1}{r}\cdot|B_{4r}^{0}(p_{j+1})\cap N|^{\frac{n-1}{n}}\cdot|B_{4r}^{0}(p_{j+1})\cap N|_{g_{0}}^{\frac{1}{n}}.$ By Theorem 2.2, (3.6) $\omega_{1}(V_{j+1},g)\leq K|V_{j+1}|^{\frac{n-1}{n}}(1+\big{|}B_{4r}^{0}(p_{j+1})\big{|}_{g_{0}}^{\frac{1}{n}})\leq(4C_{0}+1)K\cdot C(r)\alpha^{\frac{n-1}{n}}.$ Here we used (3.4) and (3.3) in the last inequality.
Observe that $p_{j+1}\notin B_{2r}^{0}(p_{i})$ for $i\leq j$, which implies that $B_{r}^{0}(p_{j+1})\cap B_{r}^{0}(p_{i})=\emptyset,\ \ \ 1\leq i\leq j.$ Then there exists $L\geq 1$ such that $N=\bigcup_{j=1}^{L}\overline{V}_{j}.$ It remains to prove that these open sets satisfy our requirements. We first prove that every $x\in M$ is contained in at most $C(r/2)\cdot C(2r)$ many $V_{j}$. Indeed, if $x\in V_{j}$, then $B^{0}_{r}(p_{j})\subset B_{5r}^{0}(x)$. Now let $J(x)=\\#\\{V_{j}:1\leq j\leq L\text{ and }B_{r}^{0}(p_{j})\subset B_{5r}^{0}(x)\\}.$ Note that $B_{5r}^{0}(x)$ can be covered by $C(r/2)C(2r)$ many balls $\\{B^{0}_{r/2}(z_{i})\\}$ in $M$. By taking $z_{j}$ such that $p_{j}\in B^{0}_{r/2}(z_{j})$, we have $B^{0}_{r/2}(z_{j})\subset B_{r}^{0}(p_{j})$. Thus $J(x)\leq C(r/2)C(2r)$. By (3.5), we have $\displaystyle\Big{|}\bigcup_{j=1}^{L}\partial V_{j}\cap\mathrm{Int}N\Big{|}$ $\displaystyle=\sum_{j=0}^{L-1}\Big{|}\partial V_{j+1}\cap\mathrm{Int}(N\setminus\bigcup_{i=0}^{j}V_{i})\Big{|}$ $\displaystyle\leq\sum_{j=1}^{L}\frac{1}{r}\cdot|B_{4r}^{0}(p_{j})\cap N|^{\frac{n-1}{n}}\cdot|B_{4r}^{0}(p_{j})\cap N|_{g_{0}}^{\frac{1}{n}}$ $\displaystyle\leq\frac{1}{r}\cdot\Big{(}\sum_{j=1}^{L}|B_{4r}^{0}(p_{j})\cap N|\Big{)}^{\frac{n-1}{n}}\cdot\Big{(}\sum_{j=1}^{L}|B_{4r}^{0}(p_{j})\cap N|_{g_{0}}\Big{)}^{\frac{1}{n}}$ $\displaystyle\leq C(r/2)C(2r)\cdot\frac{1}{r}\cdot|N|^{\frac{n-1}{n}}\cdot|N|_{g_{0}}^{\frac{1}{n}}.$ Together with (3.6), Lemma 3.1 follows by taking $C_{1}=C(r/2)C(2r)+(4C_{0}+1)K\cdot C(r)$. ∎ ## 4\. Dividing conformally thick domains Let $(M,g)$ be a closed manifold and $g_{0}\in[g]$ such that $\operatorname{Ric}_{g_{0}}(M)\geq-(n-1)$. Let $N$ be a compact domain in $M$ with piecewise smooth boundary. In this section, we estimate the volume spectrum of domains with small $g_{0}$-volume. We first recall the isoperimetric inequality developed by Glynn-Adey-Liokumovich in [GL17], which is a consequence of the length-area method.
###### Theorem 4.1 ([GL17, Theorem 3.4]). There exists a constant $c(n)$ such that the following holds: Let $U\subset M$ be an open subset. There exists an $(n-1)$-submanifold $\Sigma\subset U$ subdividing $U$ into two open sets $U_{1}$ and $U_{2}$ such that $|U_{i}|\geq 25^{-n}|U|$ and $|\Sigma|\leq c(n)\max\\{1,|U|_{g_{0}}^{\frac{1}{n}}\\}|U|^{\frac{n-1}{n}}$. Now we are ready to prove the main result of this section. ###### Theorem 4.2. There exists $C_{2}=C_{2}(n)$ satisfying the following: for every positive integer $k$, each closed $n$-dimensional Riemannian manifold $(M,g)$ and compact domain $N\subset M$ with $|N|_{g_{0}}\leq 1$, there exists a collection of compact domains $\\{U_{j}\\}$ such that $N=\cup\overline{U}_{j}$ and (4.1) $\big{|}\cup_{j}\partial U_{j}\cap(\mathrm{Int}N)\big{|}+k\max_{j}\omega_{1}(U_{j},g)\leq C_{2}|N|^{\frac{n-1}{n}}k^{\frac{1}{n}}.$ As a corollary, $\omega_{k}(N,g)\leq C_{2}|N|^{\frac{n-1}{n}}k^{\frac{1}{n}}$. ###### Proof. Without loss of generality, we assume that $|N|=1$. Let $K>1$ be the constant in Theorem 2.2. Then every compact domain $N^{\prime}\subset M$ satisfies (4.2) $\omega_{1}(N^{\prime},g)\leq K|N^{\prime}|^{\frac{n-1}{n}}.$ Now let $k>50^{n}$; the case $k\leq 50^{n}$ follows directly from (4.2) applied to $N$ itself. Then by Theorem 4.1, there exists an $(n-1)$-submanifold $\Sigma\subset N$ subdividing $N$ into two open sets $N_{0}$ and $N_{1}$ such that $|N_{1}|\geq|N_{0}|\geq 1/25^{n}$ and $|\Sigma|\leq c(n)\max\\{1,|N|^{\frac{1}{n}}_{g_{0}}\\}=c(n)$. Let $\overline{N}_{\alpha}$ be a compact domain of $N$, where $\alpha=\overline{i_{1}i_{2}\cdots i_{|\alpha|}}$ and $i_{j}\in\\{0,1\\}$.
If $|N_{\alpha}|\geq 50^{n}/k$, then using Theorem 4.1 again, there exists an $(n-1)$-submanifold $\Sigma_{\alpha}$ subdividing $N_{\alpha}$ into two open sets $N_{\overline{\alpha 0}}$ and $N_{\overline{\alpha 1}}$ such that $|N_{\overline{\alpha 1}}|_{g}\geq|N_{\overline{\alpha 0}}|\geq|N_{\alpha}|/25^{n}$ and (4.3) $|\Sigma_{\alpha}|\leq c(n)|N_{\alpha}|^{\frac{n-1}{n}}\max\\{1,|N_{\alpha}|_{g_{0}}^{\frac{1}{n}}\\}=c(n)|N_{\alpha}|^{\frac{n-1}{n}}.$ Note that we always have $k|N_{\overline{\alpha 1}}|\geq k|N_{\overline{\alpha 0}}|\geq k|N_{\alpha}|/25^{n}\geq 2^{n}$, using $|N_{\alpha}|\geq 50^{n}/k$. Denote by $\Lambda$ the collection of all $\alpha$ that appeared in the previous process. Then $\Lambda$ is an admissible tree (see Subsection 2.4). Recall that $\partial\Lambda=\\{\alpha\in\Lambda:\overline{\alpha 0}\notin\Lambda\\}.$ Then we have $N=\bigcup_{\alpha\in\partial\Lambda}N_{\alpha},$ where $|N_{\alpha}|<50^{n}/k$ and $\mathrm{Int}N_{\alpha}\cap\mathrm{Int}N_{\beta}=\emptyset$ for any $\alpha\neq\beta\in\partial\Lambda$. Now we define $\\{U_{j}\\}$ as $\\{N_{\alpha}\\}_{\alpha\in\partial\Lambda}$. We now prove that this collection of domains satisfies our requirements. Denote by $k_{\alpha}=k|N_{\alpha}|$. Note that $|N_{\alpha}|<50^{n}/k$. Then for each $\alpha\in\partial\Lambda$, $k_{\alpha}=k|N_{\alpha}|<50^{n}.$ By (4.2), we have $k_{\alpha}\omega_{1}(N_{\alpha},g)<50^{n}\omega_{1}(N_{\alpha},g)\leq 50^{n}K|N_{\alpha}|^{\frac{n-1}{n}},$ which implies that for all $\alpha\in\partial\Lambda$, (4.4) $k\omega_{1}(N_{\alpha},g)\leq k/k_{\alpha}\cdot 50^{n}K|N_{\alpha}|^{\frac{n-1}{n}}=k^{\frac{1}{n}}/k_{\alpha}\cdot 50^{n}K\cdot(k_{\alpha})^{\frac{n-1}{n}}\leq 50^{n}K\cdot k^{\frac{1}{n}}.$ Here, in the equality, we used $k|N_{\alpha}|=k_{\alpha}$; in the last inequality, we used $k_{\alpha}\geq 1$. ###### Claim 1. There exists $K_{1}(n)$ depending only on $n$ such that $\Big{|}\bigcup_{\alpha\in\partial\Lambda}\partial N_{\alpha}\cap\mathrm{Int}N\Big{|}\leq K_{1}(n)k^{\frac{1}{n}}.$ ###### Proof of Claim 1.
Note that $\\{k_{\alpha}\\}$ is a $(\Lambda,1/50^{n})$-decomposition of $k$. Then by Proposition 2.4 (with $\lambda=1/50^{n}$), (4.5) $k^{\frac{n-1}{n}}+\sum_{\alpha\in\Lambda\setminus\partial\Lambda}k_{\alpha}^{\frac{n-1}{n}}\leq(1+\widetilde{\lambda})k,$ where $\widetilde{\lambda}$ is defined as in Subsection 2.4. Then we have $\displaystyle\Big{|}\bigcup_{\alpha\in\partial\Lambda}\partial N_{\alpha}\cap\mathrm{Int}N\Big{|}$ $\displaystyle=|\Sigma|+\sum_{\alpha\in\Lambda\setminus\partial\Lambda}|\Sigma_{\alpha}|$ $\displaystyle\leq c(n)\Big{(}|N|^{\frac{n-1}{n}}+\sum_{\alpha\in\Lambda\setminus\partial\Lambda}|N_{\alpha}|^{\frac{n-1}{n}}\Big{)}$ $\displaystyle\leq c(n)k^{-\frac{n-1}{n}}\Big{(}k^{\frac{n-1}{n}}+\sum_{\alpha\in\Lambda\setminus\partial\Lambda}k_{\alpha}^{\frac{n-1}{n}}\Big{)}$ $\displaystyle\leq 2c(n)\cdot k^{\frac{1}{n}}(1+\widetilde{\lambda}).$ Here the first inequality is from (4.3); the last one follows from (4.5). Let $K_{1}(n)=2c(n)(1+\widetilde{\lambda}).$ Then Claim 1 is proved. ∎ Recall that $\\{U_{j}\\}$ are exactly $\\{N_{\alpha}\\}_{\alpha\in\partial\Lambda}$. Using Claim 1 and (4.4), we obtain $|\cup_{j}\partial U_{j}\cap\mathrm{Int}N|+k\max_{j}\omega_{1}(U_{j},g)\leq(50^{n}K+K_{1}(n))k^{\frac{1}{n}}.$ Letting $C_{2}=50^{n}K+K_{1}(n)$ gives the desired inequality. ∎ ## 5\. The conformal upper bounds In this section, we prove the conformal upper bounds for the volume spectrum. We first divide the manifold into conformally thin and thick domains, to which Lemma 3.1 and Theorem 4.2 can be applied respectively. Recall that $|\cdot|$ and $|\cdot|_{g_{0}}$ denote the Hausdorff measures with respect to $g$ and $g_{0}$, respectively. The following result is equivalent to Theorem 1.1. ###### Theorem 5.1.
There exists a constant $C_{3}=C_{3}(n)$ such that for any $n$-dimensional closed Riemannian manifold $(M,g)$, we have $\omega_{k}(M,g)\leq C_{3}|M|^{\frac{n-1}{n}}\max\\{k^{\frac{1}{n}},|M|_{g_{0}}^{\frac{1}{n}}\\},$ where $g_{0}$ is conformal to $g$ and $\operatorname{Ric}_{g_{0}}(M)\geq-(n-1)$. ###### Proof. Without loss of generality, we assume that $|M|(:=|M|_{g})=|M|_{g_{0}}$. For any $k>100^{n}$, define $r_{k}=\frac{1}{4}\cdot\Big{(}\frac{|M|}{2kC_{0}C(1)}\Big{)}^{\frac{1}{n}},\ \ \ \text{ and }\ \ \ \alpha_{k}=\frac{|M|}{k}.$ Denote by $\bar{k}=\Big{[}\frac{|M|}{2C(1)}\Big{]}+1.$ Then for any $k\geq\bar{k}$, we have $r_{k}<1/4$. ###### Claim 2. There exist $m(\leq k-1)$ many domains $\\{D_{j}\\}_{j=1}^{m}$ such that * • $|D_{j}|_{g_{0}}<1$ and $|\cup\partial D_{j}|\leq 4C_{0}C(1)|M|^{\frac{n-1}{n}}\cdot k^{\frac{1}{n}}$; * • $|B_{r_{k}}^{0}(p)\setminus\cup_{j=1}^{m}D_{j}|<\alpha_{k}$ for all $p\in M$. ###### Proof of Claim 2. Let $D_{0}=\emptyset$. Then we construct $\\{D_{j}\\}$ inductively. Suppose we have $D_{0},\cdots,D_{j}$. If $\big{|}B_{r_{k}}^{0}(p)\setminus\cup_{i=1}^{j}D_{i}\big{|}<\alpha_{k}$ for all $p\in M$, then we just let $m=j$. Otherwise, take $p_{j+1}$ such that for all $p\in M$, $\Big{|}B_{r_{k}}^{0}(p_{j+1})\setminus\bigcup_{i=0}^{j}D_{i}\Big{|}\geq\Big{|}B_{r_{k}}^{0}(p)\setminus\bigcup_{i=0}^{j}D_{i}\Big{|}.$ Clearly, $p_{j+1}\notin B_{2r_{k}}^{0}(p_{i})$ for all $i\leq j$ and $|B_{r_{k}}^{0}(p_{j+1})\setminus\cup_{i=1}^{j}D_{i}|_{g}\geq\alpha_{k}$. Note that $B_{4r_{k}}^{0}(p_{j+1})$ is covered by $C(r_{k})$ many balls of radius $r_{k}$. 
Thus we have (5.1) $\Big{|}B_{4r_{k}}^{0}(p_{j+1})\setminus\bigcup_{i=0}^{j}D_{i}\Big{|}\leq C(r_{k})\Big{|}B_{r_{k}}^{0}(p_{j+1})\setminus\bigcup_{i=0}^{j}D_{i}\Big{|}.$ Since $r_{k}<1$, we have (5.2) $\big{|}B_{4r_{k}}^{0}(p_{j+1})\big{|}_{g_{0}}\leq C_{0}\cdot(4r_{k})^{n}.$ Then by Proposition 2.3, we can take $D_{j+1}$ satisfying $B_{3r_{k}}^{0}(p_{j+1})\setminus\bigcup_{i=0}^{j}D_{i}\subset D_{j+1}\subset B_{4r_{k}}^{0}(p_{j+1})\setminus\bigcup_{i=0}^{j}D_{i}$ and (5.3) $\displaystyle\Big{|}\partial D_{j+1}\cap\mathrm{Int}(M\setminus\bigcup_{i=0}^{j}D_{i})\Big{|}$ $\displaystyle\leq\frac{1}{r_{k}}\cdot\Big{|}B_{4r_{k}}^{0}(p_{j+1})\setminus\bigcup_{i=0}^{j}D_{i}\Big{|}_{g_{0}}^{\frac{1}{n}}\cdot\Big{|}B_{4r_{k}}^{0}(p_{j+1})\setminus\bigcup_{i=0}^{j}D_{i}\Big{|}^{\frac{n-1}{n}}$ $\displaystyle\leq 4C_{0}C(r_{k})\cdot\Big{|}B_{r_{k}}^{0}(p_{j+1})\setminus\bigcup_{i=0}^{j}D_{i}\Big{|}^{\frac{n-1}{n}}.$ Here in the last inequality, we used (5.1) and (5.2). Then there exists an integer $m\geq 0$ such that after $m$ steps we obtain $\\{D_{j}\\}_{j=1}^{m}$ such that for all $p\in M$, $\Big{|}B_{r_{k}}^{0}(p)\setminus\bigcup_{j=0}^{m}D_{j}\Big{|}<\alpha_{k}.$ Hence the domains $\\{D_{j}\\}$ satisfy the second item. We now verify the first requirement. Since $|D_{j}|_{g}>\alpha_{k}=|M|/k$, we conclude that $m\leq k-1$. Recall that $D_{j}\subset B^{0}_{4r_{k}}(p_{j})$.
Then we have $|D_{j}|_{g_{0}}\leq|B_{4r_{k}}^{0}(p_{j})|_{g_{0}}\leq C_{0}(4r_{k})^{n}<1.$ Moreover, $\displaystyle\Big{|}\bigcup_{j=1}^{m}\partial D_{j}\Big{|}$ $\displaystyle=\sum_{j=0}^{m-1}\Big{|}\partial D_{j+1}\cap\mathrm{Int}(M\setminus\bigcup_{i=0}^{j}D_{i})\Big{|}$ $\displaystyle\leq 4C_{0}C(r_{k})\sum_{j=0}^{m-1}\Big{|}B_{r_{k}}^{0}(p_{j+1})\setminus\bigcup_{i=0}^{j}D_{i}\Big{|}^{\frac{n-1}{n}}$ $\displaystyle\leq 4C_{0}C(r_{k})\cdot m^{\frac{1}{n}}\cdot\Big{(}\sum_{j=0}^{m-1}\Big{|}B_{r_{k}}^{0}(p_{j+1})\Big{|}\Big{)}^{\frac{n-1}{n}}$ $\displaystyle\leq 4C_{0}C(r_{k})|M|^{\frac{n-1}{n}}\cdot k^{\frac{1}{n}}\leq 4C_{0}C(1)|M|^{\frac{n-1}{n}}\cdot k^{\frac{1}{n}}.$ Here the first inequality is from (5.3); we used Hölder's inequality in the second one; the third one follows from the fact that $B_{r_{k}}^{0}(p_{i})\cap B_{r_{k}}^{0}(p_{j})=\emptyset$ for $i\neq j$; for the last one, we used $r_{k}\leq 1$. So far, Claim 2 is proved. ∎ Denote by $D_{m+1}=\overline{M\setminus\cup_{j=1}^{m}D_{j}}$ and $k_{j}=k|D_{j}|/|M|$ for all $1\leq j\leq(m+1)$. Note that $|D_{j}|_{g_{0}}\leq 1$. Then by Theorem 4.2 (using $k=[k_{j}]+1$ and $N=D_{j}$ there), for each $1\leq j\leq m$, there exists a finite cover $\\{\overline{U}_{i}^{j}\\}_{i}$ of $D_{j}$ such that (5.4) $\big{|}\cup_{i}\partial U_{i}^{j}\cap(\mathrm{Int}D_{j})\big{|}+k_{j}\max_{i}\omega_{1}(U_{i}^{j},g)\leq C_{2}|D_{j}|^{\frac{n-1}{n}}(1+[k_{j}])^{\frac{1}{n}}\leq 2C_{2}|D_{j}|^{\frac{n-1}{n}}k_{j}^{\frac{1}{n}},$ using $k_{j}\geq 1$, which also implies (5.5) $k\max_{i}\omega_{1}(U_{i}^{j},g)\leq\frac{k}{k_{j}}\cdot 2C_{2}\Big{(}\frac{k_{j}}{k}\cdot|M|\Big{)}^{\frac{n-1}{n}}k_{j}^{\frac{1}{n}}=2C_{2}|M|^{\frac{n-1}{n}}k^{\frac{1}{n}}.$ Note that $|B_{r_{k}}^{0}(p)\cap D_{m+1}|\leq\alpha_{k}$ for each $p\in M$.
Applying Lemma 3.1 ($\alpha=\alpha_{k}$ and $r=r_{k}$), $D_{m+1}$ can be subdivided into disjoint open sets $\\{V_{j}\\}$ by $\cup_{j=1}^{L}\partial V_{j}$ satisfying the following: (5.6) $\displaystyle\Big{|}\bigcup_{j=1}^{L}\partial V_{j}\cap\mathrm{Int}D_{m+1}\Big{|}\leq(C_{4}/r_{k})\cdot|D_{m+1}|^{\frac{1}{n}}_{g_{0}}\cdot|D_{m+1}|^{\frac{n-1}{n}};$ (5.7) $\displaystyle\omega_{1}(V_{j},g)\leq C_{4}\alpha_{k}^{\frac{n-1}{n}}\ \ \text{ for }\ \ 1\leq j\leq L.$ Here $C_{4}=5C_{0}(K+C(1/2))C(1)>C(r_{k}/2)C(r_{k})+(4C_{0}+1)K\cdot C(r_{k})$. Note that $M$ is covered by $\\{{\overline{D}}_{j}\\}_{j=1}^{m+1}$. Hence $M$ is subdivided into $\cup_{j=1}^{m}\\{U_{i}^{j}\\}_{i}\cup\\{V_{l}\\}_{l=1}^{L}$. Then by Gromov [Gro88] and Guth [Guth09] (see also [GL17]*Proof of Theorem 7.1), $\displaystyle\omega_{k}(M,g)$ $\displaystyle\leq\sum_{j=1}^{m}\sum_{i}\big{|}\partial U_{i}^{j}\cap\mathrm{Int}D_{j}\big{|}+\Big{|}\bigcup_{j=1}^{m}\partial D_{j}\Big{|}+\Big{|}\bigcup_{j=1}^{L}\partial V_{j}\cap\mathrm{Int}D_{m+1}\Big{|}+k\max_{i,j}\omega_{1}(U_{i}^{j},g)+$ $\displaystyle\ \ +k\max_{1\leq j\leq L}\omega_{1}(V_{j},g)$ $\displaystyle\leq\sum_{j=1}^{m}2C_{2}|D_{j}|^{\frac{n-1}{n}}k_{j}^{\frac{1}{n}}+4C_{0}C(1)|M|^{\frac{n-1}{n}}k^{\frac{1}{n}}+(C_{4}/r_{k})\cdot|D_{m+1}|^{\frac{1}{n}}_{g_{0}}\cdot|D_{m+1}|^{\frac{n-1}{n}}+$ $\displaystyle\ \ +2C_{2}|M|^{\frac{n-1}{n}}k^{\frac{1}{n}}+C_{4}\alpha_{k}^{\frac{n-1}{n}}\cdot k$ $\displaystyle\leq 2C_{2}\Big{(}\sum_{j=1}^{m}|D_{j}|\Big{)}^{\frac{n-1}{n}}\Big{(}\sum_{j=1}^{m}k_{j}\Big{)}^{\frac{1}{n}}+4C_{0}C(1)|M|^{\frac{n-1}{n}}k^{\frac{1}{n}}+8C_{0}C(1)C_{4}\cdot|M|^{\frac{n-1}{n}}k^{\frac{1}{n}}+$ $\displaystyle\ \ +(2C_{2}+C_{4})|M|^{\frac{n-1}{n}}k^{\frac{1}{n}}$ $\displaystyle\leq\big{(}4C_{2}+13C_{0}C(1)C_{4}\big{)}|M|_{g}^{\frac{n-1}{n}}k^{\frac{1}{n}}.$ Here the second inequality is from (5.4), (5.5), (5.6), (5.7), and Claim 2; in the third inequality, we used Hölder’s inequality for the first item, and the fact
$|D_{m+1}|_{g_{0}}\leq|M|_{g_{0}}=|M|_{g}$ for the third item. Then we conclude that for any $k\geq\bar{k}$, (5.8) $\omega_{k}(M,g)\leq C_{3}|M|^{\frac{n-1}{n}}k^{\frac{1}{n}},$ where $C_{3}:=4C_{2}+13C_{0}C(1)C_{4}$. If $\bar{k}=1$, then we are done. Otherwise, it remains to estimate $\omega_{k}(M,g)$ for $k<\bar{k}$. Note that in this case, $\bar{k}\leq 2\frac{|M|}{2C(1)}\leq|M|=|M|_{g_{0}}.$ Then by (5.8), $\omega_{\bar{k}}(M,g)\leq C_{3}|M|^{\frac{n-1}{n}}{\bar{k}}^{\frac{1}{n}}\leq C_{3}|M|^{\frac{n-1}{n}}|M|_{g_{0}}^{\frac{1}{n}}.$ Recall that $\omega_{k}(M,g)\leq\omega_{\bar{k}}(M,g)$ for $1\leq k\leq\bar{k}$. Thus we conclude that for all $k\geq 1$, $\omega_{k}(M,g)\leq C_{3}|M|^{\frac{n-1}{n}}(k^{\frac{1}{n}}+|M|_{g_{0}}^{\frac{1}{n}}).$ ∎ ## Appendix A Proof of Theorem 2.2 ###### Proof of Theorem 2.2. We follow the steps given by Glynn-Adey and Liokumovich in [GL17], where they proved this theorem for $N=M$. Here we give the outline and point out some necessary modifications. Suppose that $N$ has smooth boundary. For any $\epsilon_{0}\in(0,1)$, take $\bar{r}(M,N,\epsilon_{0})$ such that: * • for every $x\in\partial N$, we have that $B_{r}(x)$ is $(1+\epsilon_{0})$-bilipschitz diffeomorphic to the Euclidean ball of radius $r$ and $B_{r}(x)\cap N$ is mapped onto a half-ball under the diffeomorphism. Denote by $B_{r}^{+}(x)=B_{r}(x)\cap N$; * • the monotonicity formula [GLZ16]*Theorem 3.4 holds. From now on, we fix some $\epsilon_{0}<1$. Step 1: Suppose that $N$ has smooth boundary.
There exists $\epsilon=\epsilon(M,N,\bar{r})$ satisfying the following: for any domain $D\subset N$ with $|D|<\epsilon$, there exists a collection of domains $D(=:D_{0})\supset D_{1}\supset D_{2}\supset\cdots\supset D_{m}$ satisfying * • $D_{m}\subset\mathrm{Int}N$; * • $|\partial D_{j}\cap\mathrm{Int}N|\geq|\partial D_{j+1}\cap\mathrm{Int}N|$ for $0\leq j\leq m-1$; * • for $0\leq j\leq m-1$, $D_{j}\setminus D_{j+1}$ is contained in some ball of radius $\bar{r}$ and center $x\in\partial N$; ###### Proof of Step 1. Suppose that $x\in\partial D_{j}\cap\partial N$; we now construct $D_{j+1}\subset D_{j}$. By the co-area formula, we can find $r^{\prime}\in(\bar{r}/4,3\bar{r}/4)$ such that $\partial D_{j}\cap\mathrm{Int}N$ is transverse to $\partial B_{r^{\prime}}(x)$ and $|D_{j}\cap\partial B_{r^{\prime}}(x)|\leq(8/\bar{r})\cdot|D_{j}\cap B_{r^{\prime}}(x)|.$ Denote by $S=\llbracket D_{j}\cap\partial B_{r^{\prime}}(x)\rrbracket$. Let $T$ be the minimizing current among all $T^{\prime}\in\mathcal{Z}_{n-1}(B^{+}_{r^{\prime}}(x),\partial B^{+}_{r^{\prime}}(x);\mathbb{Z}_{2})$ with $\operatorname{spt}(\partial T^{\prime}-\partial S)\subset\partial N$. Then by the regularity theory [Mor03]*Theorem 4.7 (see also [GLWZ19]*Theorem 4.7), $T$ is induced by a free boundary hypersurface $\Sigma$ with $(n-8)$-dimensional singular set. By taking $\epsilon$ small enough, from the monotonicity formula [GLZ16]*Theorem 3.4, $\Sigma\cap\partial N\cap B_{\bar{r}/2}(x)=\emptyset$. Using the monotonicity formula again, $\Sigma\cap B_{\bar{r}/4}(x)=\emptyset$. Note that by the isoperimetric choice [LZ16], there exists $V\subset B_{\bar{r}}^{+}(x)$ such that $\partial\llbracket V\rrbracket=T-S$ and the volume of $V$ is small. Hence $V$ does not contain $B^{+}_{\bar{r}/4}(x)$. Together with the fact that $\partial V$ does not intersect $B^{+}_{\bar{r}/4}(x)$, we conclude that $V\cap B_{\bar{r}/4}^{+}(x)=\emptyset$.
Now we define $D_{j+1}=D_{j}\cap(N\setminus(B^{+}_{\bar{r}}(x)\setminus V)).$ Clearly, $D_{j}\setminus D_{j+1}$ is contained in $B_{\bar{r}}^{+}(x)$. Note that $T$ is minimizing in $B_{\bar{r}}^{+}(x)$. Then it is minimizing in $B_{\bar{r}}^{+}(x)\setminus V$, i.e. $|\Sigma\cap D_{j}|\leq|\partial D_{j}\cap\mathrm{Int}(B^{+}_{\bar{r}}(x)\setminus V)|.$ This implies $|\partial D_{j}\cap\mathrm{Int}N|-|\partial D_{j+1}\cap\mathrm{Int}N|=|\partial D_{j}\cap\mathrm{Int}(B^{+}_{\bar{r}}(x)\setminus V)|-|\Sigma\cap D_{j}|\geq 0.$ Thus Step 1 is completed. ∎ Step 2: Suppose that $N$ has smooth boundary. There exist constants $\beta_{1}=\beta_{1}(n)$ and $\epsilon=\epsilon(M,N,\bar{r})$ such that for any domain $D\subset N$ with $|D|\leq\epsilon$, the following bound holds: (A.1) $\omega_{1}(D,g)\leq\beta_{1}|D|^{\frac{n-1}{n}}+|\partial D\cap\mathrm{Int}N|.$ ###### Proof of Step 2. Let $\\{D_{j}\\}_{j=1}^{m}$ be the domains constructed in Step 1. Then repeating the process inside $N$ (see also [GL17]*Proposition 4.3), there exists $D_{m}\supset D_{m+1}\supset\cdots\supset D_{L}$ such that * • $|\partial D_{j}\cap\mathrm{Int}N|\geq|\partial D_{j+1}\cap\mathrm{Int}N|$ for $m\leq j\leq L-1$; * • for $m\leq j\leq L$, $D_{j}\setminus D_{j+1}$ is contained in some ball of radius $\bar{r}$ and center $x\in N$, where $D_{L+1}=\emptyset$; By [Guth07], there exists $\beta_{1}=\beta_{1}(n)$ such that for $0\leq j\leq L$, (A.2) $\omega_{1}(D_{j}\setminus D_{j+1},g)\leq\beta_{1}|D_{j}\setminus D_{j+1}|^{\frac{n-1}{n}}.$ Now let $\Phi_{j}$ be a sweepout of $D_{j}\setminus D_{j+1}$ having no concentration of mass. Then there exist lifting maps $\widetilde{\Phi}_{j}:[0,1]\rightarrow\mathcal{C}(D_{j}\setminus D_{j+1})$ such that $\partial\circ\widetilde{\Phi}_{j}=\Phi_{j}\ \ \ \text{ for }\ \ \ 0\leq j\leq L.$ Without loss of generality, we assume that $\widetilde{\Phi}_{j}(0)=0$, $\widetilde{\Phi}_{j}(1)=\llbracket D_{j}\setminus D_{j+1}\rrbracket$. 
By [GL17]*Proposition 2.3, we can construct a sweepout of $D$ as follows: we first define $\widetilde{\Phi}:[0,1]\rightarrow\mathcal{C}(D)$ by $\widetilde{\Phi}(t)=\widetilde{\Phi}_{L-j}\Big{(}(L+1)(t-\frac{j}{L+1})\Big{)}+\llbracket D_{L+1-j}\rrbracket\ \ \text{ for }\ \ \frac{j}{L+1}\leq t\leq\frac{j+1}{L+1}.$ Then $\Phi=\partial\circ\widetilde{\Phi}$ is the desired sweepout, which has no concentration of mass. Such a construction gives that $\omega_{1}(D,g)\leq\max_{0\leq j\leq L}\\{\omega_{1}(D_{j}\setminus D_{j+1},g)+|\partial D_{j}\setminus\partial D|\\}.$ Together with (A.2), we have $\omega_{1}(D,g)\leq\beta_{1}|D|^{\frac{n-1}{n}}+|\partial D\cap\mathrm{Int}N|.$ ∎ Step 3: Suppose that $N$ has smooth boundary. There exists $\beta_{2}=\beta_{2}(n)$ such that for any domain $D\subset N$, the following bound holds (A.3) $\omega_{1}(D,g)\leq\beta_{2}\cdot(1+|D|_{g_{0}}^{\frac{1}{n}})|D|^{\frac{n-1}{n}}+2|\partial D\cap\mathrm{Int}N|.$ ###### Proof of Step 3. We use the argument in [GL17]*Theorem 5.1. Let $\epsilon_{1}=25^{-n}\cdot\epsilon$. Take $\beta_{2}(n)=\beta_{1}(n)+3c(n)\cdot\Big{[}1-(1-25^{-n})^{\frac{n-1}{n}}\Big{]}^{-1}$. Here $c(n)$ is the constant in [GL17]*Lemma 3.4. It follows that (A.4) $\Big{[}1-(1-25^{-n})^{\frac{n-1}{n}}\Big{]}\beta_{2}(n)\geq 3c(n).$ By Step 2, for $k\leq 25^{n}$, (A.3) holds for $D$ with $|D|\leq k\epsilon_{1}$. We proceed by induction on $k$. Suppose the inequality holds for compact domains with volume at most $k\epsilon_{1}$. Consider any $D\subset N$ with $k\epsilon_{1}<|D|\leq(k+1)\epsilon_{1}$.
By Theorem 4.1, there exists a hypersurface $\Sigma$ subdividing $D$ into $D_{0}$ and $D_{1}$ such that $|D_{j}|\leq(1-25^{-n})|D|$ (for $j=0,1$) and (A.5) $|\Sigma|\leq c(n)|D|^{\frac{n-1}{n}}(1+|D|_{g_{0}}^{\frac{1}{n}}).$ Then using the construction of sweepouts in Step 2, we have (A.6) $\omega_{1}(D,g)\leq\max_{j\in\\{0,1\\}}\\{\omega_{1}(D_{j},g)+|\partial D_{j}\setminus\partial D|\\}.$ Note that for $j=0,1$, $|D_{j}|\leq(1-25^{-n})|D|\leq|D|-\epsilon_{1}\leq(k+1)\epsilon_{1}-\epsilon_{1}=k\epsilon_{1}.$ Hence by the induction hypothesis, $\displaystyle\omega_{1}(D_{j},g)$ $\displaystyle\leq\beta_{2}\cdot(1+|D_{j}|_{g_{0}}^{\frac{1}{n}})|D_{j}|^{\frac{n-1}{n}}+2|\partial D_{j}\cap\mathrm{Int}N|$ $\displaystyle\leq\beta_{2}\cdot(1+|D|_{g_{0}}^{\frac{1}{n}})|D|^{\frac{n-1}{n}}\cdot(1-25^{-n})^{\frac{n-1}{n}}+2|\partial D\cap\mathrm{Int}N|+2|\Sigma|$ $\displaystyle\leq(\beta_{2}-3c(n))(1+|D|_{g_{0}}^{\frac{1}{n}})|D|^{\frac{n-1}{n}}+2|\partial D\cap\mathrm{Int}N|+2|\Sigma|$ $\displaystyle\leq\beta_{2}\cdot(1+|D|_{g_{0}}^{\frac{1}{n}})|D|^{\frac{n-1}{n}}+2|\partial D\cap\mathrm{Int}N|-|\Sigma|.$ Here the third inequality is from (A.4) and we used (A.5) in the last one. Then together with (A.6), we conclude that $\omega_{1}(D,g)\leq\beta_{2}\cdot(1+|D|_{g_{0}}^{\frac{1}{n}})|D|^{\frac{n-1}{n}}+2|\partial D\cap\mathrm{Int}N|.$ This finishes Step 3. ∎ Step 4: We prove the theorem for a general compact domain $N$ (having piecewise smooth boundary). ###### Proof of Step 4. Now let $N$ be a compact domain with piecewise smooth boundary. Then we have a tubular neighborhood $U$ of $N$ such that $U$ has smooth boundary, $|U|_{g_{0}}\leq 2|N|_{g_{0}}$, and $|U|\leq 2|N|$. Then by Step 3, $\omega_{1}(U,g)\leq\beta_{2}\cdot(1+|U|_{g_{0}}^{\frac{1}{n}})|U|^{\frac{n-1}{n}}\leq 2\beta_{2}\cdot(1+|N|_{g_{0}}^{\frac{1}{n}})|N|^{\frac{n-1}{n}}.$ Then the desired inequality follows from $\omega_{1}(N,g)\leq\omega_{1}(U,g)$ if we take $K=2\beta_{2}(n)$. ∎ This completes the proof of Theorem 2.2.
∎
# Semantic Video Segmentation for Intracytoplasmic Sperm Injection Procedures Peter He Department of Computing Imperial College London London, UK [email protected] Raksha Jain Faculty of Medicine Imperial College London London, UK Jérôme Chambost Apricity Paris, France Céline Jacques Apricity Paris, France Cristina Hickman Institute of Reproductive and Developmental Biology Imperial College London London, UK ###### Abstract We present the first deep learning model for the analysis of intracytoplasmic sperm injection (ICSI) procedures. Using a dataset of ICSI procedure videos, we train a deep neural network to segment key objects in the videos achieving a mean IoU of 0.962, and to localize the needle tip achieving a mean pixel error of 3.793 pixels at 14 FPS on a single GPU. We further analyze the variation between the dataset’s human annotators and find the model’s performance to be comparable to human experts. ## 1 Introduction Intracytoplasmic sperm injection (ICSI) is an assisted reproductive technology (ART) involving the injection of a single spermatozoon directly into the cytoplasm of a mature oocyte under a microscope. Though used in 70-80% of in-vitro fertilization (IVF) cycles, some technical aspects of the procedure remain controversial or inconclusive [1]. Moreover, despite increasing standardization [1], the success rate of the procedure can vary from operator to operator [2, 3, 4] with clinic success rates in North America ranging from 50% to 80% [5]. A number of studies attribute these variations (at least in part) to operator technique [2, 3, 4, 6]. A prospective study in [6] found that a modified injection technique led to "adequate" fertilization and pregnancy rates in patients with previous ICSI failures using the conventional technique.
A retrospective study of 535 manually-analyzed videos of ICSI procedures in [3] found that a certain type of intracellular needle movement can significantly reduce the likelihood of fertilization, providing a measurable technical performance indicator that could be implemented as part of an ICSI quality control process. In this study, we propose and implement the first deep neural network model for the segmentation of key objects in ICSI procedure videos. The model has applications in not only accelerating the hitherto slow and manual task of analyzing ICSI videos for research, but also in implementing quality control processes in the IVF laboratory and providing trainee embryologists with real-time feedback on their technique. ## 2 Method ### 2.1 Data Preparation Videos of 156 ICSI procedures were obtained from a private clinic across four embryologists on three different ICSI kits. The videos were recorded at 15 FPS and were split between training (130), validation (3) and testing (23) sets. Frames were extracted from each trimmed video once every three frames, yielding a dataset of 7983 frames. The frames were labelled with polygons being drawn around the suction pipette and oolemma and a point being placed at the needle tip. Each frame was labelled by one of a team of five operators and validated by a sixth. The frames were then converted to grayscale, resized to $512\times 512$ pixels and processed with contrast-limited adaptive histogram equalization. Figure 1: Model Architecture ### 2.2 Model Architecture We propose a modified nested U-Net architecture based on [7] with multiple heads (as detailed in Figure 1). The encoder takes a $512\times 512$ input and downsamples it to a $64\times 64\times 256$ bottleneck through successive layers of two $3\times 3$ convolutions followed by batch normalization and $2\times 2$ max-pooling. From the bottleneck, the network splits into a decoder branch and a needle localization branch.
Each layer of the decoder branch comprises bilinear upsampling followed by two $3\times 3$ convolutions, concatenation with nested skip pathways, a further two $3\times 3$ convolutions and batch normalization. A final $3\times 3$ convolution attached to the end of the decoder generates segmentation masks $y_{seg}\in[0,1]^{512\times 512\times 2}$. The needle localization module comprises a $1\times 1$ convolution followed by a softmax layer which generates a normalized heatmap $\mathit{Z}\in[0,1]^{64\times 64}$ for the needle tip position. The normalized heatmap is passed through a differentiable spatial to numerical transform [8] to produce a corresponding pair of coordinates $y_{coords}\in[-1,1]^{2}$. ### 2.3 Model Training The model is trained with the multi-objective loss: $\mathcal{L}_{total}=\mathcal{L}_{seg}+\lambda_{1}\mathcal{L}_{euc}+\lambda_{2}\mathcal{L}_{js}$ where $\lambda_{1},\lambda_{2}\in\mathbb{R}$ are constant weightings; $\mathcal{L}_{seg}$ is the Dice loss between $y_{seg}$ and the ground truth segmentation masks; $\mathcal{L}_{euc}$ is the Euclidean distance between $y_{coords}$ and the ground truth needle tip coordinates; and $\mathcal{L}_{js}$ is the Jensen-Shannon divergence between $\mathit{Z}$ and a normal distribution around the ground truth needle tip coordinates as described in [8]. The parameters of the model were learnt using the diffGrad optimizer [9] with batches of size 4 and an initial learning rate of $1\times 10^{-3}$. The training data was augmented with random cropping, rotation, flips, elastic and optical distortion, Gaussian noise and erasing. ## 3 Experiments ### 3.1 Intra & Interoperator Variation In order to understand the variation in labels between labelling operators, each operator was asked to label a smaller dataset of 14 frames randomly selected from the training set.
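The heatmap-to-coordinate step of the needle localization branch (the differentiable spatial to numerical transform of [8]) amounts to taking the expectation of a coordinate grid under the softmax-normalized heatmap. A minimal NumPy sketch of that transform follows; it illustrates the operation rather than reproducing the authors' implementation, and the $64\times 64$ grid size is the one quoted above.

```python
import numpy as np

def dsnt(logits):
    """Differentiable spatial-to-numerical transform (sketch).

    logits: (H, W) array of raw scores for the needle-tip heatmap.
    Returns (x, y) in [-1, 1]: the expectation of a pixel-center
    coordinate grid under the softmax-normalized heatmap Z.
    """
    h, w = logits.shape
    z = np.exp(logits - logits.max())  # numerically stable softmax
    z /= z.sum()
    xs = (2.0 * np.arange(w) + 1.0) / w - 1.0  # pixel centers in [-1, 1]
    ys = (2.0 * np.arange(h) + 1.0) / h - 1.0
    x = float((z * xs[None, :]).sum())  # E[x] under Z
    y = float((z * ys[:, None]).sum())  # E[y] under Z
    return x, y

# A sharp peak at grid cell (32, 32) of a 64x64 heatmap maps to
# coordinates near the origin (offset by the cell-center shift of 1/64).
logits = np.zeros((64, 64))
logits[32, 32] = 50.0
x, y = dsnt(logits)
```

Because the output is a smooth function of the logits, a Euclidean-distance loss on the coordinates can be backpropagated through the heatmap, which is the point of using a DSNT rather than a hard argmax.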
The IoU between segmentation masks as well as the Euclidean distance between the needle tip annotations were calculated for each pairing of operators. Moreover, the variation in labels from a single operator was quantified: the operators were asked to relabel the same dataset of 14 frames an additional four times. Successive rounds of labelling were spaced at least two hours apart and annotations from previous rounds were hidden. The IoU between segmentation masks and the Euclidean distance between the needle tip annotations were calculated for each operator over each pairing of rounds. The results for both experiments are summarized in Table 1. It was determined that there was no statistically significant difference between intra- and interoperator performance ($p>.05$).

Figure 2: Examples of predicted segmentation masks. The oolemma is highlighted purple; the suction pipette is highlighted grey; the background is highlighted blue; and the predicted needle tip is shown with a yellow dot.

Figure 3: Histogram of distances from needle tip predictions to ground truth.

Table 1: Mean Operator Performance (with standard deviations in square brackets).

| | Interoperator | Intraoperator |
|---|---|---|
| Oolemma (IoU) | 0.960 [0.016] | 0.959 [0.015] |
| Pipette (IoU) | 0.964 [0.021] | 0.965 [0.013] |
| Needle (pixels) | 3.731 [3.043] | 3.503 [3.230] |

### 3.2 Model Evaluation The model was trained for 31 epochs (26 hours) on a single RTX 2080Ti GPU and evaluated on 1000 frames extracted from 23 ICSI videos. The model achieved IoU scores of 0.961 ($\sigma$ = 0.021) and 0.963 ($\sigma$ = 0.064) for the oolemma and pipette classes respectively. The average Euclidean distance from the predicted needle tip to the ground truth location was 3.793 pixels ($\sigma$ = 6.981). A histogram of these distances can be seen in Figure 3. It was determined that there was no significant difference between human interoperator and our model’s performance on any of the classes ($p>.05$).
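The IoU agreement scores reported above reduce to a simple set comparison between binary masks. A small self-contained sketch (with hypothetical masks, not the study's annotations):

```python
import numpy as np

def iou(mask_a, mask_b):
    """Intersection-over-union between two boolean segmentation masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(inter) / float(union) if union else 1.0  # empty masks agree

# Two hypothetical annotations of the same square region, offset by 10 px.
a = np.zeros((512, 512), dtype=bool)
b = np.zeros((512, 512), dtype=bool)
a[100:300, 100:300] = True  # operator 1
b[110:310, 110:310] = True  # operator 2
score = iou(a, b)
```

Even a 10-pixel shift of a 200-pixel square drops the IoU to roughly 0.82, which gives a feel for how tight the ~0.96 agreement between operators (and between the model and the ground truth) actually is.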
The model is relatively small at 2.6 M parameters, and inference takes 70.2 ms per frame (~14 FPS). ## 4 Conclusion & Future Work In this paper, we introduce the first deep neural network model for the segmentation of ICSI videos. The model achieves roughly human-level performance with speed and consistency on a single GPU, setting a strong baseline for future research in the area. There is much scope for further work on the model itself. Temporal consistency between predictions for consecutive frames may be improved by conditioning predictions on previous frames or through post-processing techniques (though the latter may not always be possible in real-time settings). Though already suitable for the analysis of ICSI technique, the model may prove useful to a wider audience by recognizing other structures present in the videos. Moreover, in order to increase robustness with respect to different laboratory setups, the training dataset should be expanded to encompass a wider range of equipment across multiple clinics. At a more applied level, the model may be combined with unsupervised techniques to analyze ICSI technique at scale and aid in the discovery of "best practices" in ICSI. ## Broader Impact This work has the potential to positively impact infertile people and others unable to conceive without medical assistance by enabling further research into ICSI which may help them in fulfilling their dreams of having children. However, the research may also generate a negative impact by perhaps enabling further work into automated ICSI procedures which, while obviously having applications in improving efficiency and standardization in IVF laboratories, may be misused in the long run (for example, for eugenics or the raising of an army of clones). Moreover, in the shorter term, errors made by the system when used for processing videos for research may lead to incorrect conclusions being drawn.
It is thus important that predictions are validated by a human-in-the-loop and not blindly trusted. ## Acknowledgments and Disclosure of Funding The project would not have been possible without the participation of labelling operators Alyssa Arshad (Faculty of Life Sciences and Medicine, King’s College London), Ryan Patel (Barts and the London School of Medicine and Dentistry, Queen Mary University of London), Sahar Ley (School of Biological and Chemical Sciences, Queen Mary University of London) and Urvi Bihani (Faculty of Medicine, Imperial College London). We also thank Bogdan Surdu for his comments and proofreading. Funding in direct support of this work: none. ## References * [1] Patrizia Rubino, Paola Viganò, Alice Luddi, and Paola Piomboni. The ICSI procedure from past to future: a systematic review of the more controversial aspects. Human Reproduction Update, 22(2):194–227, 11 2015. * [2] Ashley W Tiegs and Richard T Scott. Evaluation of fertilization, usable blastocyst development and sustained implantation rates according to intracytoplasmic sperm injection operator experience. Reproductive BioMedicine Online, 41(1):19–27, 2020. * [3] C.E. Daniel, C. Hickman, T. Wilkinson, O. Oliana, D. Gwinnett, G. Trew, and S. Lavery. Maximising success rates by improving ICSI technique: which factors affect outcome? Fertility and Sterility, 104(3), 2015. * [4] Shehua Shen, Amin Khabani, Nancy Klein, and David Battaglia. Statistical analysis of factors affecting fertilization rates and clinical outcome associated with intracytoplasmic sperm injection. Fertility and Sterility, 79(2):355–360, 2003. * [5] Z. Lu, X. Zhang, C. Leung, N. Esfandiari, R. F. Casper, and Y. Sun. Robotic ICSI (Intracytoplasmic Sperm Injection). IEEE Transactions on Biomedical Engineering, 58(7):2102–2108, 2011. * [6] T. Ebner, M. Moser, M. Sommergruber, K. Jesacher, and G. Tews. Complete oocyte activation failure after ICSI can be overcome by a modified injection technique.
Human Reproduction, 19(8):1837–1841, 08 2004. * [7] Zongwei Zhou, Md Mahfuzur Rahman Siddiquee, Nima Tajbakhsh, and Jianming Liang. UNet++: Redesigning Skip Connections to Exploit Multiscale Features in Image Segmentation. IEEE Transactions on Medical Imaging, 39(6):1856–1867, 2020. * [8] Aiden Nibali, Zhen He, Stuart Morgan, and Luke Prendergast. Numerical Coordinate Regression with Convolutional Neural Networks. CoRR, abs/1801.07372, 2018. * [9] S. R. Dubey, S. Chakraborty, S. K. Roy, S. Mukherjee, S. K. Singh, and B. B. Chaudhuri. diffgrad: An optimization method for convolutional neural networks. IEEE Transactions on Neural Networks and Learning Systems, pages 1–12, 2019.
# A New Method for Simulating Photoprocesses in Astrochemical Models Ella Mullikin Department of Chemistry, Wellesley College, Wellesley, MA 02481, USA Hannah Anderson Department of Chemistry, Wellesley College, Wellesley, MA 02481, USA Natalie O’Hern Department of Chemistry, Wellesley College, Wellesley, MA 02481, USA Megan Farrah Department of Chemistry, Wellesley College, Wellesley, MA 02481, USA Christopher R. Arumainayagam Department of Chemistry, Wellesley College, Wellesley, MA 02481, USA Ewine F. van Dishoeck Leiden Observatory, Leiden University, P.O. Box 9513, NL-2300 RA Leiden, The Netherlands Max-Planck-Institut für extraterrestrische Physik, D-85748 Garching, Germany Perry A. Gerakines Astrochemistry Laboratory, NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA Anton I. Vasyunin Ural Federal University, Ekaterinburg, Russia Visiting Leading Researcher, Engineering Research Institute ’Ventspils International Radio Astronomy Centre’ of Ventspils University of Applied Sciences, Inženieru 101, Ventspils LV-3601, Latvia Liton Majumdar School of Earth and Planetary Sciences, National Institute of Science Education and Research, HBNI, Jatni 752050, Odisha, India Paola Caselli Center for Astrochemical Studies Max Planck Institute for Extraterrestrial Physics Garching, Germany Christopher N. Shingledecker Center for Astrochemical Studies Max Planck Institute for Extraterrestrial Physics Garching, Germany Institute for Theoretical Chemistry University of Stuttgart Pfaffenwaldring 55, 70569 Stuttgart, Germany Department of Physics & Astronomy, Benedictine College, Atchison, KS 66002, USA (Received –; Revised –; Accepted –) ###### Abstract We propose a new model for treating solid-phase photoprocesses in interstellar ice analogues. In this approach, photoionization and photoexcitation are included in more detail, and the production of electronically-excited (suprathermal) species is explicitly considered.
In addition, we have included non-thermal, non-diffusive chemistry to account for the low temperatures characteristic of cold cores. As an initial test of our method, we have simulated two previous experimental studies involving the UV irradiation of pure solid O2. In contrast to previous solid-state astrochemical model calculations which have used gas-phase photoabsorption cross-sections, we have employed solid-state cross-sections in our calculations. This method allows the model to be tested using well-constrained experiments rather than poorly constrained gas-phase abundances in ISM regions. Our results indicate that inclusion of non-thermal reactions and suprathermal species allows for reproduction of low-temperature solid-phase photoprocessing experiments that simulate interstellar ices within cold ($\sim$ 10 K) dense cores such as TMC-1. Keywords: astrochemistry, ISM: molecules, ISM: molecular processes, photoprocessing, astrochemical modeling. Journal: ApJ. Software: MONACO (Vasyunin et al. 2017), MATLAB. ## 1 Introduction While gas-phase and surface reactions on bare carbonaceous or silicate dust grains contribute to cosmic chemistry, the energetic processing of cosmic ices within dark, dense molecular clouds via photochemistry (initiated by non-ionizing radiation) and radiation chemistry (initiated by ionizing radiation) is thought to be the dominant mechanism for the interstellar synthesis of prebiotic molecules (see, for example, the review by Arumainayagam et al. 2019). Rate-equation-based modeling treatments of UV-induced condensed-phase photochemistry have been moderately successful in reproducing the abundances of complex organic molecules (COMs) observed toward hot cores/corinos (Shingledecker et al., 2019b; Grassi et al., 2014; McElroy et al., 2013; Garrod, 2013).
However, recent detections of several COMs (e.g., methyl formate (HCOOCH3) and dimethyl ether (CH3OCH3)) in cold ($\sim$ 10 K) dense cores (Vastel et al., 2014; Taquet et al., 2017; Scibelli & Shirley, 2020; Bacmann et al., 2012; Jiménez-Serra et al., 2016; Öberg et al., 2010), albeit in smaller abundance than in hot cores, have led to the search for alternative cold or non-thermal mechanisms of complex molecule production (Shingledecker et al., 2018; Vasyunin et al., 2017). A recent radiolysis-related computational study (Shingledecker et al., 2019b) has provided an explanation for the unprecedented observations of chemical synthesis at temperatures as low as 10 K in starless and prestellar cores. In this modified bulk-chemistry method involving radiolysis by cosmic rays, radicals produced within the ice are considered to be trapped and attempt to react with a neighbor on approximately every vibration. In the study described herein, we use this non-diffusive mechanism to revise the treatment of solid-phase photoprocesses in astrochemical models to account for the complex organics observed in cold cores. A recent study by Jin & Garrod (2020) utilizes a non-diffusive rate-based model which demonstrates the dependence of COM production on non-diffusive reactions between radicals and ice species in cold astrochemical environments and achieves considerable success in reproducing observations toward prestellar core L1544. In contrast to that study, the model presented here incorporates (1) the detailed inclusion of photoionization and photoexcitation, and (2) explicit consideration of the production and reaction of electronically excited radicals (suprathermal species).
One of the main processing mechanisms of ices in molecular clouds is radiation chemistry, which involves ionization and the production of copious numbers of low-energy ($<$ 15 eV) electrons, which are thought to be the dominant species involved in radiation chemistry (e.g., Arumainayagam et al., 2010). Ionizing radiation present in this environment includes MeV to TeV cosmic rays ($\sim$ 85% H$^{+}$, $\sim$ 13% He$^{2+}$, $\sim$ 1% heavy bare nuclei, and $\sim$ 1% electrons) and high-energy photons (e.g., vacuum ultraviolet photons with energies higher than $\sim$ 10 eV, extreme ultraviolet, X-ray, and $\gamma$-ray). Whereas high-energy photons contribute to radiation chemistry in dense molecular clouds, low-energy ($<$ 10 eV) photons (e.g., far (deep)-UV (4.1–6.2 eV)) initiate photochemistry, a process that does not involve direct ionization, but instead involves reactions of electronically-excited species. The UV interstellar radiation field, consisting of radiation from nearby stars, is extinguished by dust well before reaching the interior of dark, dense molecular clouds where prebiotic molecules are synthesized. However, a local secondary UV field exists to initiate photo-processing of dust grain ices (Prasad & Tarafdar, 1983). Cosmic rays excite gaseous molecular hydrogen, resulting in Lyman and Werner band emission with an estimated flux of $\sim 10^{3}$–$10^{4}$ photons cm$^{-2}$ s$^{-1}$ (Gredel et al., 1989; Cruz-Diaz et al., 2014; Shen et al., 2004). Although this spectral distribution includes high-energy (10 to 13.6 eV) photons, over half of the secondary UV field consists of low-energy ($<$ 10 eV) photons capable of photochemistry by exciting condensed species which may then react within the ice.
Except during high photon-flux laser experiments which may involve multi-photon processes, photochemistry is subject to the Bunsen-Roscoe law, which states that the photochemical yield is directly proportional to dose, irrespective of dose rate; this law allows for extrapolation from laboratory experiments to real astrochemical predictions, though even these low-flux experiments utilize fluxes much higher than those experienced by ice in dark, dense molecular clouds. Astrochemical models provide a critical link between the fundamental chemical information revealed by laboratory experiments and predictions and observations of chemical abundances in the interstellar medium. Most models utilize a rate-based approach due to the convenience and speed of that method, though Monte Carlo models have been used to more accurately simulate processes such as the catastrophic impact of high energy radiation or multi-layer interactions (Cuppen et al., 2017; Öberg, 2016). Because all such simulations involving reaction networks and rate-equations are extremely sensitive to input parameters, these models generally become more successful as parameters are better constrained by laboratory experiments. Abundances of several COMs in hot cores/corinos are well reproduced by modern rate-equation-based computational models, which include a coupled gas-phase and grain-surface chemistry or three-phase (gas, surface, and bulk) chemistry (Cuppen et al., 2017). Abundances of COMs in cold ($\sim$ 10 K) cores, however, are generally underpredicted. Most astrochemical models that include bulk-phase processes require thermal diffusion before reaction (Cuppen et al., 2017); however, in the low-temperature conditions of starless and prestellar cores, this thermal motion within the bulk ice is not feasible. 
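The freeze-out of thermal bulk diffusion at cold-core temperatures can be illustrated with a simple Arrhenius estimate. The trial frequency and barrier height below are generic order-of-magnitude values assumed purely for illustration, not parameters taken from any of the models cited here:

```python
import math

def hop_rate(nu_hz, barrier_k, temp_k):
    """Arrhenius-type thermal hopping rate k = nu * exp(-E_b / T),
    with the barrier E_b expressed in kelvin (i.e., E_b / k_B)."""
    return nu_hz * math.exp(-barrier_k / temp_k)

# Assumed order-of-magnitude values: a characteristic lattice vibration
# frequency and a generic bulk-diffusion barrier of ~1000 K.
nu = 1.0e12                           # s^-1, trial frequency (assumed)
k_cold = hop_rate(nu, 1000.0, 10.0)   # 10 K, cold-core conditions
k_warm = hop_rate(nu, 1000.0, 100.0)  # same barrier at 100 K
```

With these illustrative numbers the hopping rate at 10 K is so small that no hop occurs over the age of the universe, while at 100 K hops are effectively instantaneous, which is why non-diffusive (trapped-radical) reaction channels must carry the bulk chemistry in cold cores.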
Several explanations for the formation of COMs in cold dense cores have been proposed, including: (1) photo-processing followed by reactive desorption (Watanabe & Kouchi, 2002; Chuang et al., 2017; Aikawa et al., 2008; Herbst & van Dishoeck, 2009; Jin & Garrod, 2020), (2) gas-phase reactions (Balucani et al., 2015; Codella et al., 2020), (3) astrophysical-shock-catalyzed chemistry (James et al., 2020), and (4) methanol reactive desorption on CO-rich ices (within the catastrophic CO-freeze-out zone of pre-stellar cores), followed by gas-phase chemistry (Vasyunin et al., 2017). The model presented herein instead assumes that reactive species are trapped within the bulk ice but have the possibility of reacting with neighboring molecules during each vibration. This work is an extension of a previous study (modeling the physicochemical effects of bombardment of astrochemical O2 and H2O ice analogues by energetic protons), which revealed the importance of considering fast non-thermal reactions in these systems (Shingledecker et al., 2019b). In what follows, we apply this assumption to the case of photon irradiation of cosmic-ice analogues. The model, as utilized in this work, includes only photon-initiated ice processing, covering both excitation and ionization events. Cations produced via ionization are assumed to quickly recombine with a secondary electron, resulting in an electronically excited parent molecule, which can then dissociate into electronically excited products. Models such as ours are essential for interpreting planetary and interstellar ice data generated by past, ongoing, and upcoming NASA missions such as Spitzer, the Stratospheric Observatory for Infrared Astronomy (SOFIA), and the James Webb Space Telescope (JWST). As a proof of concept, this new model is used to simulate two published laboratory studies that monitored the processing of O2 ice by $<$ 10.8 eV photons.
Given that interstellar ice mantles are likely segregated into polar and apolar layers, it is useful to “tune” the model to simulate the photo-processing of a single species accurately; these species-specific models may then be combined to simulate processing of the layers of more realistic cosmic-ice analogues (Tielens et al., 1991; Pontoppidan, 2006; Öberg et al., 2009, 2011). The first study (Gerakines et al., 1996) employed a microwave-discharge hydrogen-flow lamp (MDHL) with a photon flux of $2.2\times 10^{14}$ photons cm-2 s-1. The MDHL spectrum closely reproduces the calculated dark, dense molecular cloud secondary UV spectrum above 115 nm (below 10.8 eV). However, the fraction of Lyman alpha emission in a MDHL spectrum can change significantly based on experimental settings such as microwave power and gas pressure (Ligterink et al., 2015). In Gerakines et al. (1996), oxygen ices ($\sim$ 100 nm in thickness) were deposited at $\sim$ 10 K inside a vacuum chamber at $10^{-7}$ Torr, mimicking conditions relevant to those of interstellar ices in cold cores. Two capping layers of argon precluded both contamination and significant desorption from the ices during photon irradiation. The oxygen ice was irradiated with photons for one hour, corresponding to a maximum fluence of $7.9\times 10^{17}$ photons cm-2, equivalent to approximately a million years of secondary interstellar UV irradiation. Production of O3 during the irradiation was monitored via the 1043 cm-1 IR feature of O3. In the second experiment (Raut et al., 2011), a pulsed ArF excimer laser (193 nm), defocused using a MgF2 lens, with a flux of $\sim 2.3\times 10^{14}$ photons cm-2 s-1, was used to irradiate O2 ices 80–84 nm thick. The use of 6.4 eV photons precludes radiation chemistry. The maximum fluence was $9.3\times 10^{18}$ photons cm-2. A vacuum chamber with a base pressure of $10^{-9}$ Torr and an ice temperature of 22 K simulated interstellar-like conditions.
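The fluence bookkeeping for the two experiments can be checked directly: fluence is simply photon flux multiplied by irradiation time, and a laboratory dose maps onto an interstellar timescale via the secondary-UV flux. A minimal sketch, using the values quoted above (the interstellar equivalence is an order-of-magnitude estimate, since the secondary-UV flux itself spans $10^{3}$–$10^{4}$ photons cm-2 s-1):

```python
# Photon fluence = flux x irradiation time. Gerakines et al. (1996): one hour at the MDHL flux.
flux_mdhl = 2.2e14                       # photons cm^-2 s^-1
fluence_mdhl = flux_mdhl * 3600.0        # ~7.9e17 photons cm^-2, as quoted

# Raut et al. (2011): the reported fluence and flux imply the total irradiation time.
flux_arf = 2.3e14                        # photons cm^-2 s^-1
fluence_arf = 9.3e18                     # photons cm^-2
t_arf_hours = fluence_arf / flux_arf / 3600.0

# Equivalent interstellar exposure at a secondary-UV flux of ~1e4 photons cm^-2 s^-1
# (order of magnitude only; the text quotes ~1 Myr for the lower end of the flux range):
years_mdhl = fluence_mdhl / 1e4 / 3.15e7  # 3.15e7 s per year
```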
The ozone column density as a function of fluence was monitored via the 1043 cm-1 and 2109 cm-1 IR features of O3. The goal of the present work is to improve the current understanding of ice chemistry initiated by interstellar secondary UV radiation within dark, dense molecular clouds during the starless and prestellar stages, well before the formation of protostars and planets. The successful reproduction of experimental results herein indicates that the inclusion of non-thermal reactions and suprathermal species will allow for more accurate modeling of interstellar photoprocessing of ices in cold cores. Additional tuning of the model for other species, such as water, will render the model suitable for predicting cold-core COM abundances attributable to photo-processing of mixed ices.

## 2 Methods

### 2.1 Theory

As in Shingledecker & Herbst (2018), the starting point of our proposed model is the assumption that the interaction between a UV photon and some target species, $A$, results in one of the following outcomes:

$A\leadsto A^{+}+e^{-}$ (P1)

$A\leadsto A^{+}+e^{-}\rightarrow A^{*}\rightarrow B^{*}+C^{*}$ (P2)

$A\leadsto A^{*}\rightarrow B+C$ (P3)

$A\leadsto A^{*}$ (P4)

Here, the curly arrow ($\leadsto$) represents the absorption of a photon, B and C are dissociation products, and * indicates an electronically excited (suprathermal) species. Of the four processes given above, (P1) and (P2) correspond to the photoionization of $A$ to the cation $A^{+}$ (followed, in the case of (P2), by the rapid recombination of the charged products) and are relevant in solids for $h\nu\gtrapprox 10$ eV (Arumainayagam et al., 2019). Similarly, (P3) and (P4) account for photoexcitation to the excited state $A^{*}$, with (P3) leading to the photodissociation of $A$. One advantage of separating photoionization and photoexcitation is that the former can be enabled or disabled based on the energy of the incident photons.
The rate coefficients of photoionization and photodissociation processes, $k_{\mathrm{photo}}$, are usually calculated using

$k_{\mathrm{photo}}=\int\sigma(\lambda)I(\lambda)d\lambda$ (1)

where $\sigma(\lambda)$ and $I(\lambda)$ are the wavelength-dependent cross-section and photon flux, respectively. This formula can also be expressed as

$k_{\mathrm{photo}}=\frac{\int\sigma(\lambda)I(\lambda)d\lambda}{\int I(\lambda)d\lambda}\int I(\lambda)d\lambda=\bar{\sigma}\Phi$ (2)

where $\bar{\sigma}$ is the average cross-section and $\Phi$ is the integrated photon flux. Following Shingledecker & Herbst (2018), we can then express the rates of (P1) – (P4) in the following way:

$k_{\mathrm{P1}}=P_{\mathrm{e}}\bar{\sigma}_{\mathrm{ion}}\Phi\delta$ (3)

$k_{\mathrm{P2}}=(1-P_{\mathrm{e}})\bar{\sigma}_{\mathrm{ion}}\Phi\delta$ (4)

$k_{\mathrm{P3}}=P_{\mathrm{dis}}\bar{\sigma}_{\mathrm{exc}}\Phi\delta$ (5)

$k_{\mathrm{P4}}=(1-P_{\mathrm{dis}})\bar{\sigma}_{\mathrm{exc}}\Phi\delta.$ (6)

Here, $P_{\mathrm{e}}$ is the electron escape probability (Elkomoss & Magee, 1962), which, as a first approximation, we assume to be zero. A more comprehensive model will need to relax this approximation to account for the effects of low-energy secondary electrons, thought to be the primary agents of radiation chemistry. All ionized molecules are assumed to quickly recombine to form an excited molecule, which will subsequently dissociate, react, or be quenched. Quenching by the surrounding ice is assumed to be the dominant relaxation mechanism rather than radiative relaxation, and the attempt frequency is used as the first-order rate constant for this process (Shingledecker et al., 2019a). In reality, electronic excitations (excitons) may diffuse from the interior of the ice to the selvedge, where they can drive the desorption of species into the gas (Thrower et al., 2011; Marchione et al., 2016).
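Eqs. (2)–(6) can be sketched numerically: average a wavelength-dependent cross-section over the lamp spectrum, then split the resulting photo-rate among processes (P1)–(P4). This is a minimal illustration; the spectra below are invented placeholders, and the function names are not part of the MONACO code.

```python
def average_cross_section(wavelengths, sigma, intensity):
    """Average cross-section of Eq. (2): int(sigma*I)dl / int(I)dl (trapezoidal rule)."""
    def trapz(y, x):
        return sum(0.5 * (y[i] + y[i + 1]) * (x[i + 1] - x[i]) for i in range(len(x) - 1))
    weighted = [s * f for s, f in zip(sigma, intensity)]
    return trapz(weighted, wavelengths) / trapz(intensity, wavelengths)

def photo_rates(sigma_ion, sigma_exc, phi, p_e=0.0, p_dis=0.5, delta=1.0):
    """First-order rate coefficients for processes (P1)-(P4), Eqs. (3)-(6).

    With p_e = 0 (the first approximation adopted in the text), every
    ionization proceeds through (P2), i.e. rapid recombination to an
    excited molecule.
    """
    return {
        "P1": p_e * sigma_ion * phi * delta,
        "P2": (1.0 - p_e) * sigma_ion * phi * delta,
        "P3": p_dis * sigma_exc * phi * delta,
        "P4": (1.0 - p_dis) * sigma_exc * phi * delta,
    }
```

For example, with $\bar{\sigma}_{\mathrm{exc}} = 2.13\times 10^{-18}$ cm2 and $\Phi = 2.2\times 10^{14}$ photons cm-2 s-1 (the MDHL values of Table 2), $k_{\mathrm{P3}} = k_{\mathrm{P4}} \approx 2.3\times 10^{-4}$ s-1.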
$P_{\mathrm{dis}}$ is the dissociation probability, which is $\sim$ 1 in the gas phase; in solids, we assume it to be 0.5 as a first approximation. This value was adjusted to account for spectral characteristics in later simulations. Dissociation products can recombine to reform the parent species, but this recombination is neither assumed to occur preferentially over, nor calculated differently from, any other possible chemical reaction the fragments could undergo with other bulk species. The factor $\delta$ is a fitting factor introduced to account for assumptions of the model and absolute uncertainties in experimental data, such as photon flux. More explicitly, $\delta$ is sensitive to, e.g., (a) the reactions and photoproducts included in the chemical network, (b) the associated branching fractions or cross sections, as well as (c) the methods for treating the underlying physical processes employed in the code. Thus, reasonable agreement between calculated and experimental data obtained assuming $\delta\approx 1$ for all photoprocesses would suggest that (a), (b), and (c) capture the salient features of a given system. Conversely, shortcomings in (a), (b), or (c) can be compensated for to some degree by adjusting $\delta$ values to yield the best agreement with experimental results. For photoprocesses occurring in the bulk of optically thick ices, an extinction factor, $\epsilon$, can be included in Eqs. (3) – (6) to account for the reduced photon flux relative to the ice surface. Because the two experimental studies of interest used optically thin ices, the extinction factor was set equal to 1.

### 2.2 Model

Table 1: Reactions comprising the chemical network used to model O2 photo-processing and subsequent chemistry.
| Photon-Induced Reactions | Non-Photon-Induced Reactions | | |
|---|---|---|---|
| $\ce{O}\xrightarrow{ionization}\ce{O}^{+}+\ce{e}^{-}\rightarrow\ce{O}^{*}$ | $\ce{O3}^{*}+\ce{O3}\rightarrow\ce{O2}+\ce{O2}+\ce{O2}$ | $\ce{O2}^{*}+\ce{O}\rightarrow\ce{O3}^{*}$ | $\ce{O}^{*}+\ce{O3}^{*}\rightarrow\ce{O2}+\ce{O2}$ |
| $\ce{O2}\xrightarrow{ionization}\ce{O2}^{+}+\ce{e}^{-}\rightarrow\ce{O2}^{*}\rightarrow\ce{O}^{*}+\ce{O}^{*}$ | $\ce{O}+\ce{O}\rightarrow\ce{O2}^{*}$ | $\ce{O2}^{*}+\ce{O}\rightarrow\ce{O2}+\ce{O}$ | $\ce{O2}^{*}+\ce{O2}^{*}\rightarrow\ce{O2}+\ce{O2}$ |
| $\ce{O3}\xrightarrow{ionization}\ce{O3}^{+}+\ce{e}^{-}\rightarrow\ce{O3}^{*}\rightarrow\ce{O2}^{*}+\ce{O}^{*}$ | $\ce{O}^{*}+\ce{O}\rightarrow\ce{O}+\ce{O}$ | $\ce{O2}^{*}+\ce{O2}\rightarrow\ce{O2}+\ce{O2}$ | $\ce{O2}^{*}+\ce{O3}^{*}\rightarrow\ce{O2}+\ce{O2}+\ce{O}$ |
| $\ce{O}\xrightarrow{excitation}\ce{O}^{*}$ | $\ce{O}^{*}+\ce{O}\rightarrow\ce{O2}^{*}$ | $\ce{O2}^{*}+\ce{O3}\rightarrow\ce{O2}+\ce{O2}+\ce{O}$ | $\ce{O3}^{*}+\ce{O3}^{*}\rightarrow\ce{O2}+\ce{O2}+\ce{O2}$ |
| $\ce{O2}\xrightarrow{excitation}\ce{O2}^{*}$ | $\ce{O}^{*}+\ce{O2}\rightarrow\ce{O3}^{*}$ | $\ce{O}^{*}+\ce{O}^{*}\rightarrow\ce{O}+\ce{O}$ | $\ce{O}+\ce{O3}^{*}\rightarrow\ce{O2}+\ce{O2}$ |
| $\ce{O3}\xrightarrow{excitation}\ce{O3}^{*}$ | $\ce{O}^{*}+\ce{O2}\rightarrow\ce{O}+\ce{O2}$ | $\ce{O}^{*}+\ce{O2}^{*}\rightarrow\ce{O3}^{*}$ | $\ce{O2}+\ce{O3}^{*}\rightarrow\ce{O2}+\ce{O2}+\ce{O}$ |
| $\ce{O2}\xrightarrow{excitation}\ce{O2}^{*}\rightarrow\ce{O}+\ce{O}$ | $\ce{O}+\ce{O3}\rightarrow\ce{O2}+\ce{O2}$ | $\ce{O}^{*}+\ce{O2}^{*}\rightarrow\ce{O}+\ce{O2}$ | $\ce{O}+\ce{O2}\rightarrow\ce{O3}^{*}$ |
| $\ce{O3}\xrightarrow{excitation}\ce{O3}^{*}\rightarrow\ce{O2}+\ce{O}$ | $\ce{O}^{*}+\ce{O3}\rightarrow\ce{O2}+\ce{O2}$ | | |

In this work, we have utilized the MONACO model (Vasyunin et al., 2017), previously modified by us to simulate ice radiation chemistry experiments (Shingledecker et al., 2019b).
This code, written in Fortran 90, solves a system of coupled differential equations describing the evolution of the abundance of each species in our network. Unlike comparable astrochemical models, the model described herein accounts for electronically excited suprathermal species produced during photo-processing of ices. Table 1 presents all photon-induced and non-photon-induced reactions (those involving products of the initial photo-processing) included in the model network for O2. Reactions occurring in the selvedge, considered to be the top four monolayers of the ice (Vasyunin & Herbst, 2013), are assumed to occur via the Langmuir-Hinshelwood mechanism, and rate coefficients are calculated using the standard formula for diffusive processes. For reactions in the bulk, rate coefficients are calculated using the non-diffusive formula of Shingledecker et al. (2019b),

$k_{\mathrm{fast}}=f_{\mathrm{br}}\left[\frac{\nu_{0}^{A}+\nu_{0}^{B}}{{N_{\mathrm{bulk}}}}\right]\mathrm{exp}\left(-\frac{E_{\mathrm{act}}^{AB}}{T_{\mathrm{ice}}}\right),$ (7)

where $f_{\mathrm{br}}$ is the branching fraction, $T_{\mathrm{ice}}$ is the ice temperature, $E_{\mathrm{act}}^{AB}$ is the activation energy (in Kelvin) for the reaction, $N_{\mathrm{bulk}}$ is the total number of bulk species in the simulated ice, and $\nu^{A}_{0}$ is the characteristic (hereafter, trial) vibrational frequency (Herbst & Millar, 2008). In our model, species in the selvedge can, in principle, desorb both thermally and following exothermic association reactions, with the latter treated by the method of Garrod (2008) with a standard efficiency of 1%. To mimic the Gerakines experiments, in which such desorption would be inhibited by the presence of a capping noble-gas layer, thermal, chemical, and photodesorption processes were disabled in our models.
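Eq. (7) translates directly into a short function; the numerical values in the usage note below are illustrative, not taken from the fitted models.

```python
import math

def k_fast(f_br, nu0_a, nu0_b, n_bulk, e_act, t_ice):
    """Non-diffusive bulk rate coefficient of Eq. (7).

    f_br   -- branching fraction of the product channel
    nu0_*  -- trial vibrational frequencies of the two reactants (s^-1)
    n_bulk -- total number of bulk species in the simulated ice
    e_act  -- activation energy in Kelvin
    t_ice  -- ice temperature in Kelvin
    """
    return f_br * (nu0_a + nu0_b) / n_bulk * math.exp(-e_act / t_ice)
```

For a barrierless reaction of a suprathermal species ($E_{\mathrm{act}} = 0$) with $\nu_{0} = 10^{15}$ s-1 for both partners and, say, $10^{6}$ bulk molecules, $k_{\mathrm{fast}} = 2\times 10^{9}$ s-1; even a modest 100 K barrier suppresses this by $e^{-10}$ at 10 K, which is why the suprathermal (barrierless) channels dominate.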
The ices studied by Raut and coworkers lacked such a noble-gas cap, and thus some amount of desorption would have occurred during the course of the experiment. However, given the fairly low temperature (22 K) of the bulk ice, we assume that thermal desorption is negligible over the timescale of the experiment. Moreover, photodesorption for the Raut et al. experiments was also disabled, since the desorption rate is not well constrained (Fayolle et al., 2013; Bulak et al., 2020) and our focus for this study was, in any case, the chemistry occurring within the bulk ice. Chemical desorption is included for the simulation of the Raut et al. data but was found to have a negligible impact on the bulk chemistry we describe in detail below. Rate constants for photon-induced reactions depend on the average cross-sections ($\bar{\sigma}$), photon flux ($\Phi$), and the fitting factor ($\delta$), as described in §2.1. Each product of photoionization or photoexcitation is treated as being trapped in a cage of neighboring bulk molecules; reactions involving suprathermal species are assumed to be barrierless. Non-photon-induced reactions are assumed to occur non-diffusively, with the rate of reaction between any two species being proportional to their abundances in the ice (Shingledecker et al., 2019b). For the pure O2 ice, we use the chemical network (Table 1) initially described in Shingledecker et al. (2017) and used in the microscopic Monte Carlo model CIRIS, later modified for use in rate-based kinetic codes in Shingledecker et al. (2019b). The choice of O2 as the bulk species for this initial test of the model is appropriate given the relative simplicity of the products and subsequent possible reactions, especially compared to species such as water or methanol. The selvedge, the chemically distinct region near the top of the mantle, is taken to be the top four monolayers of the ice (Vasyunin & Herbst, 2013).
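In practice, the non-diffusive treatment reduces to ordinary rate equations in the bulk abundances. A deliberately minimal sketch, keeping only O2 photodissociation, O + O2 association, and O3 photodissociation (all rate constants invented, suprathermal bookkeeping omitted), reproduces the qualitative growth of O3 toward steady state:

```python
def integrate(n_o2, k_dis, k_form, k_o3_dis, dt, steps):
    """Forward-Euler integration of a toy O2/O/O3 bulk network.

    d[O]/dt  =  2*k_dis*[O2] - k_form*[O]*[O2] + k_o3_dis*[O3]
    d[O2]/dt =   -k_dis*[O2] - k_form*[O]*[O2] + k_o3_dis*[O3]
    d[O3]/dt =                 k_form*[O]*[O2] - k_o3_dis*[O3]

    Note that the right-hand sides conserve the total number of O atoms,
    [O] + 2[O2] + 3[O3], at every step.
    """
    o, o2, o3 = 0.0, n_o2, 0.0
    for _ in range(steps):
        form = k_form * o * o2
        d_o = 2.0 * k_dis * o2 - form + k_o3_dis * o3
        d_o2 = -k_dis * o2 - form + k_o3_dis * o3
        d_o3 = form - k_o3_dis * o3
        o += d_o * dt
        o2 += d_o2 * dt
        o3 += d_o3 * dt
    return o, o2, o3
```

A real integration of the full network in Table 1 would use a stiff implicit solver rather than forward Euler, as stiffness is the norm in chemical kinetics; the sketch only illustrates the structure of the equations.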
Parameters relevant to the simulation of laboratory experiments include ice thickness, photon fluence (photon flux multiplied by irradiation time), and photon energy; these values were obtained directly from the manuscripts of the experiments chosen for simulation. The trial frequency, $\nu_{0}$, parameterizes the vibrational frequency of a molecule and is used as the pre-exponential factor in calculating bulk rate coefficients. The model assumes that with every vibration, there is a probability that a molecule will react with a neighboring molecule (Eq. (7)). For all simulations, the vibrational frequency was set to $1\times 10^{15}$ s-1, which is reasonable assuming RRKM theory. Increasing this value by orders of magnitude has negligible impact on model simulations, while reducing it below $1\times 10^{15}$ s-1 resulted in significant deviations from experimental data. Specific to each photoprocess included in the chemical network are cross-sections and the $\delta$ fitting factor. Branching ratios for reactions with more than one product channel were assumed to be equal in early simulations and later adjusted to match spectral characteristics. Cross-sections were obtained from various sources, as detailed below.

## 3 Results and Discussion

Table 2: Calculated average cross-sections and $\delta$-values for pure O2 ice irradiated by a MDHL and an ArF laser.
| Process | Type | $\bar{\sigma}_{\mathrm{MDHL}}$ (cm2) | $\bar{\sigma}_{\mathrm{ArF}}$ (cm2) | $\delta_{\mathrm{MDHL}}$ | $\delta_{\mathrm{ArF}}$ |
|---|---|---|---|---|---|
| $\ce{O}+h\nu\rightarrow\ce{O}^{*}$ | (P2) | 0 | 0 | 1.0 | 1.0 |
| $\ce{O}+h\nu\rightarrow\ce{O}^{*}$ | (P4) | 0 | 0 | 1.0 | 1.0 |
| $\ce{O2}+h\nu\rightarrow\ce{O}^{*}+\ce{O}^{*}$ | (P2) | $3.86\times 10^{-20}$ | $0$ | 1.0 | 1.0 |
| $\ce{O2}+h\nu\rightarrow\ce{O}+\ce{O}$ | (P3) | $2.13\times 10^{-18}$ | $2.10\times 10^{-19}$ | 2.3 | 1.9 – 2.2 |
| $\ce{O2}+h\nu\rightarrow\ce{O2}^{*}$ | (P4) | $2.13\times 10^{-18}$ | $2.10\times 10^{-19}$ | 1.0 | 0.25 – 0.35 |
| $\ce{O3}+h\nu\rightarrow\ce{O2}^{*}+\ce{O}^{*}$ | (P2) | 0 | 0 | 1.0 | 1.0 |
| $\ce{O3}+h\nu\rightarrow\ce{O2}+\ce{O}$ | (P3) | $5.60\times 10^{-18}$ | $2.15\times 10^{-18}$ | 1.0 | 0.25 – 0.35 |
| $\ce{O3}+h\nu\rightarrow\ce{O3}^{*}$ | (P4) | $5.60\times 10^{-18}$ | $2.15\times 10^{-18}$ | 1.0 | 0.25 – 0.35 |

Note. The values of $\delta_{\mathrm{MDHL}}$ were used to produce Figure 1. For $\delta_{\mathrm{ArF}}$, ranges of values are shown which were found to yield agreement with the data within experimental error.

We directly tested the validity of our new method by replicating experimental data of O2 ice irradiation with UV photons ($<10.8$ eV). In contrast to simulations of poorly constrained gas-phase abundances in ISM regions, this approach allows the model to be tested against well-constrained experiments. The simulated pure oxygen ice experiments shared the use of interstellar-like temperatures and pressures. The results of these simulations are described below.

### 3.1 Microwave-discharge hydrogen-flow lamp source

Figure 1: Calculated abundances of O3 vs. photon fluence in a UV-irradiated pure O2 ice (shown in blue), with corresponding experimental data from Gerakines et al. (1996) (shown in red). The values shown in Table 2 were used for branching ratios and cross sections. No error bars were reported in this study.

Previous work by Gerakines et al.
(1996) provides excellent data with which to quantify the validity of our approach. Their experiments on pure O2 were carried out at 10 K. They utilized a microwave-discharge hydrogen-flow lamp (MDHL), two layers of inert argon, film thicknesses on the order of 0.1 $\mu$m, photon fluxes of $\sim 10^{14}$ photons cm-2 s-1, and an irradiation time of $\sim$ 1 hr (Gerakines et al., 1996; Jenniskens et al., 1993). The O3 production curve given in Figure 8 of Gerakines et al. was digitized for comparison to the model output of O3 abundance (Fig. 1). Listed in Table 2 are the effective cross sections, $\bar{\sigma}$, for this experiment. To calculate the effective solid-phase cross section for $\ce{O2}$ photoabsorption, a reported spectrum of solid-phase O2 absorption (but not cross section) as a function of wavelength was first digitized (Lu et al., 2008). Next, these absorption data were scaled to the digitized solid-phase cross-section data of Mason et al. (2006) in order to obtain cross-section values over a broader wavelength range corresponding to the spectrum of a MDHL. The spectrum of a MDHL (Jenniskens et al., 1993) was digitized, and intensity and cross section were multiplied together at each wavelength. Finally, the product was integrated over all wavelengths and divided by the total flux (Eq. (2)). The effective solid-phase $\ce{O3}$ photoabsorption cross section was calculated by first scaling gas-phase absorption data from Sivaraman et al. (2014) to gas-phase cross-section data from the Leiden database (Heays et al., 2017). Solid-phase absorption data taken in the same laboratory (Sivaraman et al., 2014) were multiplied by the same scaling factor. The resulting solid-phase photoabsorption cross-section data as a function of wavelength were used to calculate the effective solid-phase cross section.
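The scaling step described above (placing a digitized relative-absorbance spectrum on an absolute cross-section scale via an overlapping reference measurement) reduces to one multiplicative factor. A sketch with invented arrays; real work would interpolate the digitized curves rather than take the nearest point:

```python
def scale_absorbance(wl_abs, absorbance, wl_ref, sigma_ref, anchor_wl):
    """Scale a relative absorbance spectrum to absolute cross-sections
    using a single anchor wavelength where both datasets overlap."""
    def value_at(xs, ys, x0):
        # nearest-point lookup; a faithful reduction would interpolate
        i = min(range(len(xs)), key=lambda j: abs(xs[j] - x0))
        return ys[i]
    factor = value_at(wl_ref, sigma_ref, anchor_wl) / value_at(wl_abs, absorbance, anchor_wl)
    return [a * factor for a in absorbance]
```

Combined with the spectrum-weighted average of Eq. (2), this yields the effective $\bar{\sigma}$ values reported in Table 2.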
To obtain cross-section values for the excited-state reaction path (e.g., (P4)) and the dissociation path (e.g., (P3)), total photoabsorption cross-sections were multiplied by the corresponding branching ratio. Average solid-phase cross sections for $\ce{O2}$ and $\ce{O3}$ photoionization were obtained by using gas-phase cross-section data from the Leiden database but shifting all data points by 1.5 eV for application to condensed species (Kahn, 2015; Yu et al., 1975). After inserting the experimental parameters of ice thickness, photon flux, photon-irradiation time, and reaction cross sections, the $\delta$ fitting factors given in Table 2 were obtained by manual adjustment to maximize agreement with the experimental data. The optimized steady-state O3 abundance agrees with the experimental value to within 5%.

Figure 2: Calculated abundances of O3 vs. photon fluence in a UV-photodissociated pure O2 ice (shown in blue), with corresponding experimental data from Gerakines et al. (1996) (shown in red). For this simulation, the branching ratio of $\ce{O2}$ photodissociation to $\ce{O2}$ photoexcitation was set to 0.99, and all $\delta$-values were set to 1.0.

Only the $\delta$ fitting factor for $\ce{O2}$ photodissociation varied from 1.0 for agreement with the experimental data. Given the broad absorption cross-section peak even in the solid-phase spectrum and the high probability of dissociation following photoabsorption for gaseous $\ce{O2}$, a simulation was run with a branching ratio of 99:1 for $\ce{O2}$ dissociation to $\ce{O2}$ excited-state reaction, the results of which are shown in Fig. 2. All $\delta$ values could then be set to 1.0, with agreement with the experimental data similar to that obtained when the dissociation and excited-state reactions were assumed to be equally likely but a $\delta$ value of 2.3 was required for the dissociation channel.
Thus, the original deviation of $\delta$ from 1.0 was necessary to account for the high O2 photodissociation probability when it was not otherwise included in the model.

### 3.2 Pulsed 193 nm ArF excimer laser source

Figure 3: Calculated abundances of O3 vs. photon fluence in a UV-irradiated pure O2 ice (shown in blue), with corresponding experimental data from Raut et al. (2011) (shown in red). $\delta$-values used in this simulation: $\ce{O2}+h\nu\rightarrow\ce{O}+\ce{O}$ $[\delta=2.0]$; $\ce{O2}+h\nu\rightarrow\ce{O2}^{*}$ $[\delta=0.3]$; $\ce{O3}+h\nu\rightarrow\ce{O2}+\ce{O}$ $[\delta=0.3]$; $\ce{O3}+h\nu\rightarrow\ce{O3}^{*}$ $[\delta=0.3]$; $\delta=1.0$ for all other processes.

A different condensed-phase O2 experiment (Raut et al., 2011), which utilized a pulsed laser UV source, was also simulated to test the validity of our model. These experiments were conducted at 22 K with a film thickness of $\sim$ 80 nm, a photon flux averaging $\sim 10^{14}$ photons cm-2 s-1, and a total fluence of $\sim 10^{19}$ photons cm-2 with 193 nm (6.4 eV) photons. The O3 abundance data in Figure 5 of Raut et al. were digitized for comparison to the model output. Because the O3 abundance was reported as column density, the model output was scaled to account for the thickness of the ice. In this case, cross-sections were provided in the experimental manuscript: the solid-phase 193 nm photoabsorption cross sections for O2 and O3 are reported as $4.2\times 10^{-19}$ cm2 and $4.3\times 10^{-19}$ cm2, respectively. To confirm these reported cross sections, two independently reported spectra of solid-state O2 absorption (but not cross-section) as a function of wavelength were digitized (Cruz-Diaz et al., 2014; Lu et al., 2008). Each included data at 193 nm. Next, a published spectrum of solid-phase O2 cross section as a function of wavelength was also digitized (Mason et al., 2006); this spectrum did not include data at 193 nm.
The absorption spectra were then scaled to the cross-section spectrum. The values found in the scaled data at 193 nm matched the cross-section values used by Raut et al. to within 25%. Since Raut et al. included error values in their experimental data, a Python script was used to find optimized $\delta$ fitting factors which would maximize agreement with experimental data. “Maximal agreement” was defined as any model output falling entirely within the upper and lower error bounds. Our script iteratively ran the model over a range of $\delta$ values for each process and indicated which combinations of $\delta$ values resulted in model outputs that agreed with the data within experimental error; these are given in Table 2. Because there is a range of outputs which may fall within the upper and lower error bounds, there are corresponding ranges of $\delta$ values for the most influential processes. Simulation results using $\delta$ values within this range are shown in Fig. 3. As displayed in Table 2, the $\delta$ fitting factors, although close to unity, vary somewhat between the simulations of the two experiments. When branching ratios are adjusted from the initial assumption of equal likelihood to more realistic values, all fitting factors could be set to 1.0 for the Gerakines experiment. As noted in §2.1, the $\delta$ fitting factors should be interpreted as effectively accounting for other factors (e.g., absolute uncertainties in the experimental data) not explicitly considered in the code. The fact that all $\delta$ values given in Table 2 are close to unity indicates that the overall contribution of such unknown effects is likely small, and that they are unlikely to be significant sources of uncertainty in astrophysical simulations. In our models, it was found that the O3 abundances were most sensitive to variations in the $\delta$ value for the $\mathrm{O_{2}}\leadsto\mathrm{2O}$ process, thereby revealing the importance of the O + O2 reaction for the overall abundance of ozone.
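The $\delta$ search just described can be sketched as a brute-force scan: run the forward model for each candidate $\delta$ and keep those whose output lies inside the experimental error band at every fluence. The saturating-growth forward model and all numbers below are placeholders, not the MONACO network or the Raut et al. data.

```python
import math

def within_band(model, lower, upper):
    """True if every model point lies inside the [lower, upper] error band."""
    return all(lo <= m <= up for m, lo, up in zip(model, lower, upper))

def accepted_deltas(forward, fluences, lower, upper, candidates):
    """Scan candidate delta values; keep those matching the data within error."""
    return [d for d in candidates
            if within_band([forward(f, d) for f in fluences], lower, upper)]

# Placeholder forward model: saturating O3 growth with fluence.
def toy_model(fluence, delta, sigma=2.1e-19, n_max=1.0):
    return n_max * (1.0 - math.exp(-delta * sigma * fluence))
```

Because any curve inside the band is acceptable, the scan naturally returns a range of $\delta$ values rather than a single optimum, mirroring the ranges quoted for $\delta_{\mathrm{ArF}}$ in Table 2.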
This finding reveals another useful role for the $\delta$ values, namely, that of highlighting key reactions for a given species based on how sensitive the calculated abundance is to variations in the assumed values of $\delta$. While the current model provides reasonable agreement with the findings of the considered experiments, a number of areas could be addressed in future studies to increase both agreement with empirical data and the underlying physical realism of the simulation. As mentioned, the focus of this work has been on processes occurring in the mantles of thin ice films similar to those that coat interstellar dust grains, all of which were optically thin; thus, we have not considered the effects of extinction, which would be of particular importance in optically thick ices. To investigate this effect in more detail, a multi-layer model which more explicitly treats the vertical structure of ice mantles, such as the macroscopic Monte Carlo code described in Vasyunin & Herbst (2013), would be more appropriate. Moreover, given the focus of this study on bulk chemistry, we have not considered photodesorption processes occurring in the top several monolayers of the ice. Current values used in models for these kinds of processes are not well constrained; however, our method of directly simulating laboratory experiments using astrochemical codes represents a promising means by which suitable values could be obtained. Additionally, it is known that the absorption of photons of different energies will change both the efficiency and products of photo-processes (Fayolle et al., 2013, 2011; Fillion et al., 2014). Absent theoretical or experimental cross-sections for photoprocesses as a function of energy, we have as a first approximation assumed that, for example, the photodissociation of O2 produces 2O$^{*}$ and 2O with equal probability, but not O$^{*}$ + O.
Finally, because subionization UV photons dominate the output of both the MDHL and the ArF laser, it is likely that the role of low-energy electrons is not significant in this study. To simulate the effects of secondary UV radiation within dark, dense molecular clouds, this model must be modified to include secondary low-energy electron-induced processes such as dissociative electron attachment, which can occur at electron energies almost as low as 0 eV.

## 4 Conclusions

We have simulated the $<$ 10.8 eV UV photodissociation of solid O2 at 10–22 K by a microwave-discharge hydrogen-flow lamp and an ArF excimer laser using a rate-based model. Our methodology incorporates: (a) non-diffusive bulk reactions for radicals and other reactive species and (b) a new theoretical method for simulating photoprocesses which, for the first time, distinguishes between photoexcitation and photoionization. We explicitly account for the production and reactivity of short-lived suprathermal photoproducts. In contrast to previous condensed-phase astrochemical model calculations that have used gas-phase photoabsorption cross sections, we have employed solid-phase cross sections in our calculations. This method allows the model to be tested using well-constrained experiments rather than poorly constrained gas-phase abundances in regions of the ISM. The semi-quantitative agreement of the model with experimental O3 abundances obtained in two different laboratories indicates that the methodology is promising for simulating interstellar ice photoprocessing. This new computational method, focusing on non-diffusive reactions for radicals and suprathermal species, results in improved agreement with experimental data compared to techniques that rely on bulk thermal radical diffusion, an unlikely mechanism at the exceedingly low temperatures of cold cores.
Ultimately it would be fruitful to incorporate these types of rate-based photoprocessing calculations into models that account for atom addition, gas-phase reactions, and cosmic-ray bombardment. Such models, together with observations and laboratory simulations, are necessary for a fundamental understanding of interstellar chemistry which is the likely source of prebiotic molecules in the universe. C.N.S. thanks the Alexander von Humboldt Stiftung/Foundation for their generous support. E.M. gratefully acknowledges funding from the Arnold and Mabel Beckman Foundation. The Massachusetts Space Grant Consortium supported the work of MF. CRA’s work was supported by grants from the National Science Foundation (NSF grant number CHE-1955215), Wellesley College (Faculty Awards and Brachman Hoffman small grants). Work by A.I.V. was supported by the Russian Ministry of Science and Higher Education, Project FEUZ-2020-0038 ## References * Aikawa et al. (2008) Aikawa, Y., Wakelam, V., Garrod, R. T., & Herbst, E. 2008, The Astrophysical Journal, 674, 984, doi: 10.1086/524096 * Arumainayagam et al. (2010) Arumainayagam, C. R., Lee, H.-L., Nelson, R. B., Haines, D. R., & Gunawardane, R. P. 2010, Surface Science Reports, 65, 1, doi: 10.1016/j.surfrep.2009.09.001 * Arumainayagam et al. (2019) Arumainayagam, C. R., Garrod, R. T., Boyer, M. C., et al. 2019, Chemical Society Reviews, 48, 2293, doi: 10.1039/C7CS00443E * Bacmann et al. (2012) Bacmann, A., Taquet, V., Faure, A., Kahane, C., & Ceccarelli, C. 2012, Astronomy & Astrophysics, 541, L12, doi: 10.1051/0004-6361/201219207 * Balucani et al. (2015) Balucani, N., Ceccarelli, C., & Taquet, V. 2015, Monthly Notices of the Royal Astronomical Society, 449, L16, doi: 10.1093/mnrasl/slv009 * Bulak et al. (2020) Bulak, M., Paardekooper, D. M., Fedoseev, G., & Linnartz, H. 2020, Astronomy & Astrophysics, 636, A32, doi: 10.1051/0004-6361/201937298 * Chuang et al. (2017) Chuang, K.-J., Fedoseev, G., Qasim, D., et al. 
# The hot spots conjecture can be false: Some numerical examples Andreas Kleefeld1 (author to whom any correspondence should be addressed) 1 Forschungszentrum Jülich GmbH, Jülich Supercomputing Centre, 52425 Jülich, Germany [email protected] ###### Abstract The hot spots conjecture is only known to be true for special geometries. It can be shown numerically that the hot spots conjecture fails for easy-to-construct bounded domains with one hole. The underlying eigenvalue problem for the Laplace equation with Neumann boundary condition is solved with boundary integral equations, yielding a non-linear eigenvalue problem. Its discretization via the boundary element collocation method, in combination with the algorithm by Beyn, yields highly accurate results for both the first non-zero eigenvalue and its corresponding eigenfunction, which is due to superconvergence. Additionally, it can be shown numerically that the ratio between the maximal/minimal value inside the domain and the maximal/minimal value on the boundary can be larger than $1+10^{-3}$. Finally, numerical examples of easy-to-construct domains with up to five holes are provided which fail the hot spots conjecture as well. ###### ams: 35J25, 35P20, 65F15, 65M38, 78A46 ††: arXiv * 4 January 2021 Keywords: interior Neumann eigenvalues, Helmholtz equation, potential theory, boundary integral equations, numerics ## 1 Introduction The hot spots conjecture was posed in 1974 by Jeffrey Rauch [32] and stated explicitly a decade later by Kawohl [19]. Refer also to the paper by Bañuelos & Burdzy [6] from 1999. Since then, many researchers have worked on this challenging problem (see Judge & Mondal [18] for a recent overview from 2020). Before stating it in mathematical terms, we explain it in a simple fashion. Imagine a flat piece of metal $D$, where $D$ is a bounded subset of two-dimensional space (a Euclidean domain) with a sufficiently smooth boundary, possibly with holes.
Next, an (almost) arbitrary initial temperature distribution is provided on $D$ (refer to [6, p. 2]). Assume that the domain is insulated; then, after waiting a long time, the hottest and coldest spots of $D$ will appear on the boundary. In mathematical detail: we have to solve the heat equation $\partial_{t}u=\Delta u$ for $t\rightarrow\infty$ with homogeneous Neumann boundary condition $\partial_{\nu}u=0$ and ‘almost’ arbitrary initial condition in an open connected bounded $D$ with Lipschitz boundary for its equilibrium (see [6, p. 2] and [34] for the definition of a Lipschitz domain). Refer to Figure 1 for an example. Figure 1: The numerical solution of the heat equation $\partial_{t}u=\frac{1}{10}\Delta u$ with an initial condition of standard normal random numbers and homogeneous Neumann boundary condition at the times $t_{1}=1/200$, $t_{2}=1/10$, $t_{3}=1/2$, and $t_{4}=2$ using $h=1/100$ and $k=1/100$ (see also [2, p. 11] for more details on the implementation of the exponential time differencing method). The maximal and minimal values at time $t_{4}$ appear on the boundary. Note that the solution at $t_{4}$ approximately represents the first non-zero Neumann eigenfunction (see also Figure 9). Precisely, we have to find the smallest non-trivial eigenvalue of the Laplacian with homogeneous Neumann boundary condition and the corresponding eigenfunction. Note that the smallest eigenvalue of the Laplacian with homogeneous Neumann boundary condition is zero with corresponding eigenfunction $u_{0}=\mathrm{const}$. Mathematically, we have to find a solution $u\neq 0$ and the smallest $k\in\mathbb{R}_{>0}$ such that the Helmholtz equation $\Delta u+k^{2}u=0$ holds in $D$ with $\partial_{\nu}u=0$ on the boundary $\Gamma$.
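The experiment of Figure 1 is easy to reproduce in spirit. The following is a minimal sketch, not the paper's exponential time differencing code: explicit finite differences for $\partial_{t}u=\frac{1}{10}\Delta u$ on the unit square with homogeneous Neumann conditions enforced by mirror (ghost-cell) padding; the grid size, seed, and step count are illustrative choices of ours.

```python
import numpy as np

# Minimal sketch (explicit finite differences, not the paper's ETD scheme):
# u_t = D * Laplace(u) on the unit square, homogeneous Neumann conditions,
# random initial data.  Grid, seed, and time step are illustrative choices.
D, n = 0.1, 51
h = 1.0 / (n - 1)
dt = 0.2 * h**2 / D                       # well inside the stability limit
u = np.random.default_rng(1).standard_normal((n, n))
mean0 = u.mean()

def step(u):
    # mirror (ghost-cell) padding enforces du/dnu = 0 on all four sides
    p = np.pad(u, 1, mode="edge")
    lap = (p[2:, 1:-1] + p[:-2, 1:-1] + p[1:-1, 2:] + p[1:-1, :-2] - 4 * u) / h**2
    return u + dt * D * lap

for _ in range(2000):
    u = step(u)

# insulated domain: the mean temperature is conserved, and for large times
# the solution is dominated by the lowest Neumann modes
boundary = np.concatenate([u[0, :], u[-1, :], u[:, 0], u[:, -1]])
```

After the final step the field is close to a combination of the constant mode and the first non-trivial Neumann eigenfunctions of the square, so the maximum and minimum are attained on the boundary, in line with Figure 1 (the conjecture is known to hold for rectangles).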
All solutions $k$ are called non-trivial interior Neumann eigenvalues, and $\lambda_{i}=k^{2}_{i}$ is the $i$-th non-trivial Neumann eigenvalue of the Laplacian. Its corresponding eigenfunctions are denoted by $u_{i}$. Further, it is known that the eigenvalues satisfy $0=\lambda_{0}<\lambda_{1}\leq\lambda_{2}\leq\lambda_{3}\leq\ldots$ when $D$ is a bounded planar domain with Lipschitz boundary (see for example [16, p. 449] and the references therein, specifically [33]). If $\lambda_{1}<\lambda_{2}$ and $\left\langle u(0,\cdot),u_{1}\right\rangle\neq 0$, then $u(t,x)=\left\langle u(0,\cdot),u_{0}\right\rangle u_{0}+\mathrm{e}^{-\lambda_{1}t}\left\langle u(0,\cdot),u_{1}\right\rangle u_{1}(x)+{}$faster-decaying terms, so the first non-trivial eigenfunction dominates the long-time behavior up to a constant. Note that the first non-trivial eigenvalue can have multiplicity more than one, which means that there can be more than one eigenfunction. Now, the conjecture can be stated as follows (refer also to [6, p. 2], whose numbering is used here, so that $\lambda_{2}$ denotes the first non-trivial eigenvalue): Let $D\subset\mathbb{R}^{2}$ be an open connected bounded domain with Lipschitz boundary $\Gamma$. Then: * C1: For each eigenfunction $u_{2}(x)$ corresponding to $\lambda_{2}$ which is not identically zero, we have $\inf_{x\in\Gamma}u_{2}(x)<u_{2}(y)<\sup_{x\in\Gamma}u_{2}(x)\quad\forall y\in D\,.$ * C2: For each eigenfunction $u_{2}(x)$ corresponding to $\lambda_{2}$ which is not identically zero, we have $\inf_{x\in\Gamma}u_{2}(x)\leq u_{2}(y)\leq\sup_{x\in\Gamma}u_{2}(x)\quad\forall y\in D\,.$ * C3: There exists an eigenfunction $u_{2}(x)$ corresponding to $\lambda_{2}$ which is not identically zero, such that $\inf_{x\in\Gamma}u_{2}(x)\leq u_{2}(y)\leq\sup_{x\in\Gamma}u_{2}(x)\quad\forall y\in D\,.$ Here, C1 is the original conjecture of Rauch.
The hypothesis has been shown to be true for some special geometries such as parallelepipeds, balls, rectangles, cylinders [19], obtuse triangles [6], some convex and non-convex domains with symmetry [6], wedges [3], lip domains [4], convex domains with two axes of symmetry [17], convex $C^{1,\alpha}$ domains ($0<\alpha<1$) with one axis of symmetry [31], a certain class of planar convex domains [29], subequilateral isosceles triangles [30], a certain class of acute triangles [37], Euclidean triangles [18], and strips on two-dimensional Riemannian manifolds [25]. It is assumed that the hot spots conjecture is true for arbitrary convex domains, but a proof is still open. The hot spots conjecture is also assumed to be true for simply-connected bounded non-convex domains, but no successful attempts (neither theoretical nor numerical) have been made to prove this conjecture or to find a counterexample. It has been shown for some domains with one hole that the hot spots conjecture is true (for example, an annulus [19]), but there are also domains with one or more holes where the hot spots conjecture is false (see Burdzy [9], Burdzy & Werner [10], and Bass & Burdzy [7], respectively). For domains on manifolds, we refer the reader to [13]. However, the proofs in [9, 10, 7] are very technical and based on stochastic arguments. No numerical results support their counterexamples, since their domains are too complicated to construct. To be precise, they are very thin and have a polygonal structure. Further, the first non-trivial Neumann eigenvalue is assumed to be simple. If this is not the case, the proof collapses. The only unpublished numerical results given so far are for triangles in the PolyMath project 7 ‘Hot spots conjecture’ (2012–2013) using the finite element method. Further work in the direction of better understanding the conjecture is given for example by Steinerberger [39].
Related results on graphs are given by Lederman & Steinerberger [27]. ## Contribution It is the goal of this paper to construct ‘simple’ domains with one hole and to show numerically with high precision (due to superconvergence) that those domains do not satisfy the hot spots conjecture. The method, based on boundary integral equations, is very efficient and converges faster than expected. Additionally, we show the influence of changing the boundary of the domain on the location of the hot spots in order to understand this connection. It is believed that this might help researchers formulate conditions under which the hot spots conjecture is true or false for arbitrary bounded simply-connected domains which are not necessarily convex. We show that it is possible to construct domains with one hole such that the ratio between the maximum/minimum in the interior and the maximum/minimum on the boundary is larger than $1+10^{-3}$. Finally, numerical results are given showing that there exist domains with up to five holes which do not satisfy the hot spots conjecture as well. The Matlab programs, including the produced data, are available on GitHub at https://github.com/kleefeld80/hotspots and can be used by any researcher to try their own geometries and to reproduce the numerical results within this article. ## Outline of the paper In Section 2, we explain the algorithm to compute the first non-zero Neumann Laplace eigenvalue and its corresponding eigenfunction for an arbitrary domain with or without a hole using boundary integral equations, resulting in a non-linear eigenvalue problem. Further, it is shown in detail how to discretize the boundary integral equations via the boundary element collocation method and how to numerically solve the non-linear eigenvalue problem. Extensive numerical results are provided in Section 3, showing the superconvergence and highly accurate results for domains with one or no hole.
Domains are provided that demonstrate the failure of the hot spots conjecture, along with further interesting results. The extension to domains with up to five holes is straightforward and given at the end of this section as well. A short summary and outlook is given in Section 4. ## 2 The algorithm In this section, we explain the algorithm to compute numerically, to high accuracy and very efficiently, non-trivial interior Neumann eigenvalues and their corresponding eigenfunctions for bounded domains with one hole (the extension to more than one hole is straightforward). The ingredients are boundary integral equations and their approximation via the boundary element collocation method; that is, a two-dimensional problem is reduced to a one-dimensional problem. The resulting non-linear eigenvalue problem is solved using complex-valued contour integrals over the resolvent, reducing the non-linear eigenvalue problem to a linear eigenvalue problem, which is possible due to Keldysh’s theorem (see Beyn [8]). ### 2.1 Notations We consider a bounded Lipschitz domain $D\subset\mathbb{R}^{2}$ with one hole. The outer boundary $\Gamma_{1}$ is assumed to be sufficiently smooth and oriented counter-clockwise; the inner boundary $\Gamma_{2}$ is sufficiently smooth and oriented clockwise. The normal $\nu_{1}$ on the boundary $\Gamma_{1}$ points into the unbounded exterior $E$. The normal $\nu_{2}$ on the boundary $\Gamma_{2}$ points into the bounded exterior $I$. We refer the reader to Figure 2. The boundary of $D$ is given by $\Gamma=\Gamma_{1}\cup\Gamma_{2}$ ($\Gamma_{1}\cap\Gamma_{2}=\emptyset$). Figure 2: Used notations for a bounded domain with one hole. Note that we also consider bounded domains without a hole. In this case, we have $\Gamma_{2}=\emptyset$ and hence $\Gamma=\Gamma_{1}$ and $I=\emptyset$.
### 2.2 Boundary integral equation The solution to the Helmholtz equation $\Delta u+k^{2}u=0$ (reduced wave equation) in the domain $D$ for a given wave number $k$ with $\mathrm{Im}(k)\geq 0$ is given by (see [12, Theorem 2.1]) $\displaystyle u(x)=\int_{\Gamma}\partial_{\nu(y)}u(y)\cdotp\Phi_{k}(x,y)-u(y)\cdotp\partial_{\nu(y)}\Phi_{k}(x,y)\,\mathrm{d}s(y)\,,\quad x\in D$ which can be written in our notation as $\displaystyle u(x)$ $\displaystyle=$ $\displaystyle\int_{\Gamma_{1}}\partial_{\nu_{1}(y)}u(y)\cdotp\Phi_{k}(x,y)-u(y)\cdotp\partial_{\nu_{1}(y)}\Phi_{k}(x,y)\,\mathrm{d}s(y)$ (1) $\displaystyle+$ $\displaystyle\int_{\Gamma_{2}}\partial_{\nu_{2}(y)}u(y)\cdotp\Phi_{k}(x,y)-u(y)\cdotp\partial_{\nu_{2}(y)}\Phi_{k}(x,y)\,\mathrm{d}s(y)\,,\quad x\in D$ where $\Phi_{k}(x,y)=\mathrm{i}H_{0}^{(1)}(k\|x-y\|)/4$, $x\neq y$ denotes the fundamental solution of the Helmholtz equation in two dimensions (see [12, p. 66]). Here, $H_{0}^{(1)}$ denotes the first-kind Hankel function of order zero. We denote $u(y)$ for $y\in\Gamma_{1}$ as $u_{1}(y)$ and similarly $u(y)$ for $y\in\Gamma_{2}$ as $u_{2}(y)$. Hence, we can write (1) as $\displaystyle u(x)$ $\displaystyle=$ $\displaystyle-\int_{\Gamma_{1}}u_{1}(y)\cdotp\partial_{\nu_{1}(y)}\Phi_{k}(x,y)\,\mathrm{d}s(y)$ (2) $\displaystyle-\int_{\Gamma_{2}}u_{2}(y)\cdotp\partial_{\nu_{2}(y)}\Phi_{k}(x,y)\,\mathrm{d}s(y)\,,\quad x\in D$ where we also used the homogeneous Neumann boundary conditions $\partial_{\nu_{1}}u=0$ and $\partial_{\nu_{2}}u=0$. We rewrite (2) as $\displaystyle u(x)=-\mathrm{DL}_{k}^{\Gamma_{1}}u_{1}(x)-\mathrm{DL}_{k}^{\Gamma_{2}}u_{2}(x)\,,\quad x\in D$ (3) where we used the notation $\mathrm{DL}_{k}^{\Gamma_{i}}\psi_{i}(x)=\int_{\Gamma_{i}}\psi_{i}(y)\cdotp\partial_{\nu_{i}(y)}\Phi_{k}(x,y)\,\mathrm{d}s(y)\,,\quad x\in D\,,\quad i=1,2$ for the acoustic double layer potential with density $\psi_{i}$ (see [12, p. 39]). Assume for a moment that $k$ is given. The functions $u_{1}$ and $u_{2}$ are still unknown. 
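Before turning to the determination of $u_{1}$ and $u_{2}$, note that the kernels in (1)–(3) are straightforward to evaluate. A small sketch in Python (the function names `Phi` and `dPhi_dnu_y` are ours; the wave number, points, and normal are arbitrary test values), using SciPy's Hankel function and checking that $\Phi_{k}$ solves the Helmholtz equation away from the source point:

```python
import numpy as np
from scipy.special import hankel1

def Phi(k, x, y):
    """Fundamental solution i/4 * H_0^(1)(k |x - y|) of the 2D Helmholtz equation."""
    r = np.linalg.norm(np.asarray(x, float) - np.asarray(y, float))
    return 0.25j * hankel1(0, k * r)

def dPhi_dnu_y(k, x, y, nu_y):
    """Double-layer kernel: normal derivative of Phi_k with respect to y."""
    d = np.asarray(x, float) - np.asarray(y, float)
    r = np.linalg.norm(d)
    # d/dr Phi = -(i k / 4) H_1(kr)  and  dr/dnu(y) = -(nu_y . d) / r
    return 0.25j * k * hankel1(1, k * r) * np.dot(nu_y, d) / r

# check: Delta Phi + k^2 Phi = 0 away from the source (central differences)
k, y, x, h = 2.0, np.zeros(2), np.array([1.3, 0.7]), 1e-3
lap = (Phi(k, x + [h, 0], y) + Phi(k, x - [h, 0], y)
       + Phi(k, x + [0, h], y) + Phi(k, x - [0, h], y) - 4 * Phi(k, x, y)) / h**2
residual = lap + k**2 * Phi(k, x, y)

# finite-difference consistency check of the normal derivative
nu = np.array([0.6, 0.8])                 # arbitrary unit vector
eps = 1e-5
fd = (Phi(k, x, y + eps * nu) - Phi(k, x, y - eps * nu)) / (2 * eps)
```

Both `residual` and the difference between `fd` and `dPhi_dnu_y` are small, confirming the sign conventions used in the double layer potential.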
Once we know them, we can compute the solution $u$ inside the domain $D$ at any point we want using (3). Now, we explain how to obtain these functions $u_{1}$ and $u_{2}$ on the boundary. Letting $x\in D$ approach the boundary $\Gamma_{1}$ and using the jump relation of the acoustic double layer operator (see [12, p. 39] for the smooth boundary case, otherwise [34]) yields the boundary integral equation $\displaystyle u_{1}(x)=-\left(\mathrm{D}_{k}^{\Gamma_{1}\rightarrow\Gamma_{1}}u_{1}(x)-\left(1-\Omega_{1}(x)\right)u_{1}(x)\right)-\mathrm{D}_{k}^{\Gamma_{2}\rightarrow\Gamma_{1}}u_{2}(x)\,,\quad x\in\Gamma_{1}$ (4) where we used the notation $\mathrm{D}_{k}^{\Gamma_{i}\rightarrow\Gamma_{j}}\psi_{i}(x)=\int_{\Gamma_{i}}\psi_{i}(y)\cdotp\partial_{\nu_{i}(y)}\Phi_{k}(x,y)\,\mathrm{d}s(y)\,,\quad x\in\Gamma_{j}\,,\quad i,j=1,2$ for the double layer operator (see [12, p. 41]). Here, $\Omega_{1}(x)$ denotes the interior solid angle at a point $x$ on $\Gamma_{1}$. If the boundary is smooth at this point, then $\Omega_{1}(x)=1/2$. In fact, $\Omega_{1}(x)$ is $1/2$ almost everywhere for Lipschitz domains.
Similarly, for $x\in D$ approaching the boundary $\Gamma_{2}$ and using the jump relation for the double layer operator, we obtain $\displaystyle u_{2}(x)=-\mathrm{D}_{k}^{\Gamma_{1}\rightarrow\Gamma_{2}}u_{1}(x)-\left(\mathrm{D}_{k}^{\Gamma_{2}\rightarrow\Gamma_{2}}u_{2}(x)-\left(1-\Omega_{2}(x)\right)u_{2}(x)\right)\,,\quad x\in\Gamma_{2}\,.$ (5) We can rewrite (4) and (5) as a $2\times 2$ system of boundary integral equations in the form $\displaystyle\underbrace{\left(\overbrace{\left(\begin{matrix}\Omega_{1}&0\\\ 0&\Omega_{2}\end{matrix}\right)}^{\mathrm{C}}\overbrace{\left(\begin{matrix}\mathrm{I}&0\\\ 0&\mathrm{I}\end{matrix}\right)}^{\mathrm{I}_{\mathrm{B}}}+\overbrace{\left(\begin{matrix}\mathrm{D}_{k}^{\Gamma_{1}\rightarrow\Gamma_{1}}&\mathrm{D}_{k}^{\Gamma_{2}\rightarrow\Gamma_{1}}\\\ \mathrm{D}_{k}^{\Gamma_{1}\rightarrow\Gamma_{2}}&\mathrm{D}_{k}^{\Gamma_{2}\rightarrow\Gamma_{2}}\end{matrix}\right)}^{\mathrm{K}(k)}\right)}_{\mathrm{M}(k)}\underbrace{\left(\begin{matrix}u_{1}\\\ u_{2}\end{matrix}\right)}_{u}=\left(\begin{matrix}0\\\ 0\end{matrix}\right)\quad\text{on }\Gamma$ (6) where $\mathrm{I}$ and $\mathrm{I}_{\mathrm{B}}$ denote the identity and the $2\times 2$ block identity operator, respectively. Hence, we have to numerically solve the non-linear eigenvalue problem (6) written as $\mathrm{M}(k)u=0$ to find the smallest non-trivial (real) eigenvalue $k$ and the corresponding eigenfunction $u$. Then, we can numerically evaluate (3) to compute the eigenfunction at any point in the interior. As in [21, p. 188], we can argue that the compact operator $\mathrm{K}(k)$ maps from $\mathcal{H}^{-1/2}(\Gamma_{1})\times\mathcal{H}^{-1/2}(\Gamma_{2})$ to $\mathcal{H}^{1/2}(\Gamma_{1})\times\mathcal{H}^{1/2}(\Gamma_{2})$. Here, $\mathcal{H}^{s}(\Gamma)$ denotes a Sobolev space of order $s\in\mathbb{R}$ on the domain $\Gamma$ which is defined via Bessel potentials (see [28, pp. 75–76] for more details).
The operator $\mathrm{M}(k)=\mathrm{C}\cdotp\mathrm{I}_{\mathrm{B}}+\mathrm{K}(k)$ is Fredholm of index zero for $k\in\mathbb{C}\backslash\mathbb{R}_{\leq 0}$ and therefore the theory of eigenvalue problems for holomorphic Fredholm operator-valued functions applies to $\mathrm{M}(k)$. ### 2.3 Discretization In this section, we explain how to discretize (6) using quadratic interpolation of the boundary, but using piecewise quadratic interpolation with $\alpha=(1-\sqrt{3/5})/2$ (see [22] for the 3D case) instead of quadratic interpolation for the unknown $u$ on each of the $n_{f}$ boundary elements, which ultimately leads to the non-linear eigenvalue problem $\displaystyle\mathbf{M}(k)\vec{u}=\vec{0}$ (7) where the matrix is of size $3\cdotp 2\cdotp n_{f}\times 3\cdotp 2\cdotp n_{f}$. The size of the matrix is slightly larger than the one given in Kleefeld [21], but this has the advantage that no singular integral has to be evaluated numerically, since we can use a similar singularity subtraction technique as explained in Kleefeld and Lin [23, pp. A1720–A1721], and the convergence rate is slightly higher. For simplicity, the details follow for a domain without a hole. In this case, we have to solve a boundary integral equation of the second kind of the form $\displaystyle\Omega_{1}(x)u_{1}(x)+\int_{\Gamma_{1}}u_{1}(y)\partial_{\nu_{1}(y)}\Phi_{k}(x,y)\,\mathrm{d}s(y)=0\,,\quad x\in\Gamma_{1}\,.$ (8) First, we subdivide the boundary $\Gamma_{1}$ into $n_{f}$ pieces denoted by $\Delta_{j}$ with $j=1,\ldots,n_{f}$. A subdivision into four pieces is shown in Figure 3 for the unit circle. Figure 3: Subdivision of the given boundary $\Gamma_{1}$ into four pieces $\Delta_{1}$, $\Delta_{2}$, $\Delta_{3}$, and $\Delta_{4}$.
Then equation (8) can be equivalently written as $\displaystyle\Omega_{1}(x)u_{1}(x)+\sum_{j=1}^{n_{f}}\int_{\Delta_{j}}u_{1}(y)\partial_{\nu_{1}(y)}\Phi_{k}(x,y)\,\mathrm{d}s(y)=0\,,\quad x\in\Gamma_{1}\,.$ For each $j$ there exists a unique map $m_{j}$ which maps the standard interval $\sigma=[0,1]$ to $\Delta_{j}$. Then, we can apply a simple change of variables to each integral over $\Delta_{j}$, giving $\displaystyle\Omega_{1}(x)u_{1}(x)$ $\displaystyle+$ $\displaystyle\sum_{j=1}^{n_{f}}\int_{\sigma}u_{1}(m_{j}(s))\partial_{\nu_{1}(m_{j}(s))}\Phi_{k}(x,m_{j}(s))J(s)\,\mathrm{d}s(s)=0\,,\;x\in\Gamma_{1}$ with the Jacobian given by $J(s)=\|\partial_{s}m_{j}(s)\|$. In most cases, we can explicitly write down this map. However, we approximate each $m_{j}(s)$ by a quadratic interpolation polynomial $m_{j}(s)\approx\widetilde{m}_{j}(s)=\sum_{i=1}^{3}v_{(2i-j)\,\mathrm{mod}\,(2n_{f})}L_{i}(s)$ where the Lagrange basis functions are $L_{1}(s)=u\cdotp(1-2s)\,,\quad L_{2}(s)=4s\cdotp u\,,\quad\text{ and }\quad L_{3}(s)=s\cdotp(2s-1)$ with $u=1-s$. Here, $\gamma\,\mathrm{mod}\,\delta=\gamma-\lfloor\frac{\gamma}{\delta}\rfloor\cdotp\delta$ with $\lfloor\cdotp\rfloor$ the floor function. The nodes $v_{\ell}$ ($\ell=1,\ldots,2n_{f}$) are the given vertices and midpoints of the $n_{f}$ faces. We refer the reader to Figure 4 for an example with four faces and eight nodes (four vertices and four midpoints). Figure 4: The eight nodes $v_{1},\ldots,v_{8}$, the four vertices $v_{1}$, $v_{3}$, $v_{5}$, and $v_{7}$ (marked with $\times$), and the four midpoints $v_{2}$, $v_{4}$, $v_{6}$, and $v_{8}$ (marked with $\circ$) for the four faces $\Delta_{1}$, $\Delta_{2}$, $\Delta_{3}$, and $\Delta_{4}$. An example of how the approximation of $\Delta_{1}$ via a quadratic interpolation polynomial using two vertices and the midpoint looks is shown in Figure 5.
Figure 5: The approximation of the first part of the boundary $\Delta_{1}$ (solid line) via a quadratic interpolation polynomial using two vertices and the midpoint (dashed line). Next, we define the ‘collocation nodes’ $\widetilde{v}_{j,k}$ by $\widetilde{v}_{j,k}=\widetilde{m}_{j}(q_{k})$ for $j=1,\ldots,n_{f}$ and $k=1,2,3$ where $q_{1}=\alpha$, $q_{2}=1/2$, and $q_{3}=1-\alpha$ with $0<\alpha<1/2$ a given and fixed constant. This ensures that the collocation nodes always lie strictly within a piece of the boundary, and at those points the interior solid angle is $1/2$. For a specific choice of $\alpha$ the overall convergence rate can be improved. The first three collocation nodes on the approximated boundary for the unit circle using $\alpha=(1-\sqrt{3/5})/2$ are shown in Figure 6. Figure 6: The first three collocation nodes (solid triangles) on the first part of the approximated boundary (dashed line) for the unit circle using $\alpha=(1-\sqrt{3/5})/2$. The exact boundary is shown with a solid line including the two vertices marked by a cross. The unknown function $u_{1}(\widetilde{m}_{j}(s))$ is now approximated on each of the $j$ pieces by a quadratic interpolation polynomial of the form $\sum_{k=1}^{3}u_{1}(\widetilde{m}_{j}(q_{k}))\widetilde{L}_{k}(s)$ which can be written as $\sum_{k=1}^{3}u_{1}(\widetilde{v}_{j,k})\widetilde{L}_{k}(s)$ where the Lagrange basis functions are given by $\displaystyle\widetilde{L}_{1}(s)=\frac{u-\alpha}{1-2\alpha}\frac{1-2s}{1-2\alpha}\,,\;\widetilde{L}_{2}(s)=4\frac{s-\alpha}{1-2\alpha}\frac{u-\alpha}{1-2\alpha}\,,\;\widetilde{L}_{3}(s)=\frac{s-\alpha}{1-2\alpha}\frac{2s-1}{1-2\alpha}$ with $u=1-s$.
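The basis functions $\widetilde{L}_{k}$ are easy to check directly. A short sketch (function and variable names are ours) verifying the cardinal property $\widetilde{L}_{k}(q_{m})=\delta_{km}$ and the exact reproduction of quadratic polynomials:

```python
import numpy as np

alpha = (1.0 - np.sqrt(3.0 / 5.0)) / 2.0        # collocation offset from the paper
q = np.array([alpha, 0.5, 1.0 - alpha])         # collocation nodes in (0, 1)

def Ltilde(s):
    # quadratic Lagrange basis on [0, 1] w.r.t. the nodes alpha, 1/2, 1 - alpha
    u, d = 1.0 - s, 1.0 - 2.0 * alpha
    return np.array([(u - alpha) / d * (1.0 - 2.0 * s) / d,
                     4.0 * (s - alpha) / d * (u - alpha) / d,
                     (s - alpha) / d * (2.0 * s - 1.0) / d])

# cardinal property Ltilde_k(q_m) = delta_km ...
card = np.column_stack([Ltilde(qm) for qm in q])
# ... and exact reproduction of any quadratic, here p(s) = s^2 at s = 0.3
p_at = Ltilde(0.3) @ q**2
```

Since the interpolation is exact for quadratics, `p_at` equals $0.3^{2}=0.09$ up to round-off, and `card` is the identity matrix.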
We obtain $\displaystyle\Omega_{1}(x)u_{1}(x)$ $\displaystyle+$ $\displaystyle\sum_{j=1}^{n_{f}}\sum_{k=1}^{3}\int_{\sigma}\partial_{\nu_{1}(\widetilde{m}_{j}(s))}\Phi_{k}(x,\widetilde{m}_{j}(s))\|\partial_{s}\widetilde{m}_{j}(s)\|\widetilde{L}_{k}(s)\,\mathrm{d}s(s)u_{1}(\widetilde{v}_{j,k})=r(x)$ with $r(x)$ the residual which is due to the different approximations. We set $r(\widetilde{v}_{i,\ell})=0$, and since $\Omega_{1}(\widetilde{v}_{i,\ell})=1/2$ always holds by the choice of the collocation nodes, we obtain the linear system of size $3n_{f}\times 3n_{f}$ $\displaystyle\frac{1}{2}u_{1}(\widetilde{v}_{i,\ell})+\sum_{j=1}^{n_{f}}\sum_{k=1}^{3}a_{i,\ell,j,k}u_{1}(\widetilde{v}_{j,k})=0$ with the resulting integrals $\displaystyle a_{i,\ell,j,k}=\int_{\sigma}\partial_{\nu_{1}(\widetilde{m}_{j}(s))}\Phi_{k}(\widetilde{v}_{i,\ell},\widetilde{m}_{j}(s))\|\partial_{s}\widetilde{m}_{j}(s)\|\widetilde{L}_{k}(s)\,\mathrm{d}s(s)$ (9) which will be approximated by adaptive Gauss-Kronrod quadrature (see [36]). This can be written abstractly as $\mathbf{M}(k)\vec{u}=\vec{0}$. Note that the integrand of the integral on the right-hand side of (9) can easily be written down as $\frac{\mathrm{i}kH_{1}^{(1)}(kr)}{4r}\left(a\cdotp n_{1}+b\cdotp n_{2}\right)\widetilde{L}_{k}(s)$ with $H_{1}^{(1)}$ the first-kind Hankel function of order one and with $\displaystyle a$ $\displaystyle=$ $\displaystyle\left[\widetilde{v}_{i,\ell}-\widetilde{m}_{j}(s)\right]_{1}\,,\quad b=\left[\widetilde{v}_{i,\ell}-\widetilde{m}_{j}(s)\right]_{2}\,,\quad r=\sqrt{a^{2}+b^{2}}\,,$ $\displaystyle n_{1}$ $\displaystyle=$ $\displaystyle\left[\partial_{s}\widetilde{m}_{j}(s)\right]_{2}\,,\quad n_{2}=-\left[\partial_{s}\widetilde{m}_{j}(s)\right]_{1}\,.$ Note that the Jacobian cancels out. A word must be said about the following issue: When $P\neq Q$ with $P=\widetilde{v}_{i,\ell}$ and $Q=\widetilde{m}_{j}(s)$, the integrand of the integral on the right-hand side of (9) is smooth.
However, for the case $P=Q$ a singularity is present within the integral on the right-hand side of (9). In this case, we can use the singularity subtraction method to rewrite the singular integral in the following form $\displaystyle\int_{\sigma}\partial_{\nu_{1}(\widetilde{m}_{j}(s))}\Phi_{k}(\widetilde{v}_{i,\ell},\widetilde{m}_{j}(s))\|\partial_{s}\widetilde{m}_{j}(s)\|\widetilde{L}_{k}(s)\,\mathrm{d}s(s)$ $\displaystyle=$ $\displaystyle\int_{\sigma}\partial_{\nu_{1}(\widetilde{m}_{j}(s))}\left(\Phi_{k}(\widetilde{v}_{i,\ell},\widetilde{m}_{j}(s))-\Phi_{0}(\widetilde{v}_{i,\ell},\widetilde{m}_{j}(s))\right)\|\partial_{s}\widetilde{m}_{j}(s)\|\widetilde{L}_{k}(s)\,\mathrm{d}s(s)$ $\displaystyle+$ $\displaystyle\int_{\sigma}\partial_{\nu_{1}(\widetilde{m}_{j}(s))}\Phi_{0}(\widetilde{v}_{i,\ell},\widetilde{m}_{j}(s))\|\partial_{s}\widetilde{m}_{j}(s)\|\widetilde{L}_{k}(s)\,\mathrm{d}s(s)=I_{i,\ell}^{\text{smooth}}+I_{i,\ell}^{\text{sing}}$ where $\Phi_{0}(P,Q)=-\log(|P-Q|)/(2\pi)$ is the fundamental solution of the Laplace equation. The integral $I_{i,\ell}^{\text{smooth}}$ has a smooth kernel (no singularity present) and converges rapidly to zero when increasing the number of faces (independent of the wave number $k$). Hence, we directly set $I_{i,\ell}^{\text{smooth}}=0$. The integral $I_{i,\ell}^{\text{sing}}$ (a singularity is present) can be rewritten as a sum of integrals without any singularity. This is due to the fact that we have $\mathrm{D}_{0}^{\Gamma_{1}\rightarrow\Gamma_{1}}\,1\,(x)=-\Omega_{1}(x)$ with density $\psi=1$, $\forall x\in\Gamma_{1}$ (see [35, p.
363]) and hence, we approximately use for all $i,\ell$: $\sum_{j=1}^{n_{f}}\sum_{k=1}^{3}\int_{\sigma}\partial_{\nu_{1}(\widetilde{m}_{j}(s))}\Phi_{0}(\widetilde{v}_{i,\ell},\widetilde{m}_{j}(s))\|\partial_{s}\widetilde{m}_{j}(s)\|\widetilde{L}_{k}(s)\,\mathrm{d}s(s)\approx-\Omega_{1}(\widetilde{v}_{i,\ell})=-\frac{1}{2}$ (that is, the row sum of the matrix $\mathbf{M}(0)$ obtained from the discretization of the double layer for the Laplace equation shall be $-1/2$) and hence, we can compute $I_{i,\ell}^{\text{sing}}$ as $\displaystyle I_{i,\ell}^{\text{sing}}$ $\displaystyle\approx$ $\displaystyle-\underbrace{\Omega_{1}(\widetilde{v}_{i,\ell})}_{\frac{1}{2}}$ (10) $\displaystyle-\mathop{\sum_{j=1}^{n_{f}}}_{j\neq i}\mathop{\sum_{k=1}^{3}}_{k\neq\ell}\int_{\sigma}\partial_{\nu_{1}(\widetilde{m}_{j}(s))}\Phi_{0}(\widetilde{v}_{i,\ell},\widetilde{m}_{j}(s))\|\partial_{s}\widetilde{m}_{j}(s)\|\widetilde{L}_{k}(s)\,\mathrm{d}s(s)\,.$ Each of the integrands within the integrals on the right-hand side of (10) is smooth and can be computed with the previously mentioned Gauss-Kronrod quadrature. Note that we never have to compute the interior solid angle since it is always $1/2$ by the choice of the collocation points. In fact, we never have to use the value $1/2$ since it cancels out with the $1/2$ within the definition of $\mathbf{M}(k)$. Finally, if we use a domain with a hole, we obtain the system of size $m\times m=3\cdotp 2\cdotp n_{f}\times 3\cdotp 2\cdotp n_{f}$ when including the boundary $\Gamma_{2}$ and the unknown function $u_{2}$. Written abstractly, we obtain the non-linear eigenvalue problem $\displaystyle\mathbf{M}(k)\vec{u}=\vec{0}$ (11) Of course, the extension to more than one hole is obvious. In this case, the matrix $\mathbf{M}(k)$ within (11) will be of size $3\cdotp(q+1)\cdotp n_{f}\times 3\cdotp(q+1)\cdotp n_{f}$, where $q$ denotes the number of holes within the domain (consistent with the size $3\cdotp 2\cdotp n_{f}$ for one hole).
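The identity $\mathrm{D}_{0}^{\Gamma_{1}\rightarrow\Gamma_{1}}\,1\,(x)=-\Omega_{1}(x)$ underlying the subtraction can be verified directly. A small numeric sketch (our own setup, not the paper's code) on the unit circle, where the Laplace double-layer kernel happens to be the constant $-1/(4\pi)$ for boundary points, together with the interior analogue $\mathrm{D}_{0}1(x)=-1$:

```python
import numpy as np

def dl_kernel(x, y, nu_y):
    # d/dnu(y) Phi_0(x, y) for the Laplace fundamental solution
    # Phi_0(x, y) = -log|x - y| / (2 pi)
    d = x - y
    return np.dot(nu_y, d) / (2.0 * np.pi * np.dot(d, d))

n = 400
t = 2.0 * np.pi * (np.arange(n) + 0.5) / n       # midpoint rule on the circle
ys = np.stack([np.cos(t), np.sin(t)], axis=1)    # nodes; outward normal is y itself
ds = 2.0 * np.pi / n

x_bnd = np.array([1.0, 0.0])                     # smooth boundary point, Omega = 1/2
x_int = np.array([0.2, 0.1])                     # interior point, Omega = 1
sum_bnd = sum(dl_kernel(x_bnd, y, y) for y in ys) * ds
sum_int = sum(dl_kernel(x_int, y, y) for y in ys) * ds
```

Here `sum_bnd` is $-1/2$ essentially to machine precision (the kernel is constant on the circle, so the quadrature is exact), and `sum_int` is $-1$, matching the interior solid angle.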
### 2.4 Non-linear eigenvalue problem The non-linear eigenvalue problem (11) is solved with the Beyn algorithm [8]. It is based on complex-valued contour integrals over the resolvent, which, by Keldysh's theorem, reduce the non-linear eigenvalue problem to a linear one of very small size. Precisely, a user-specified contour $\gamma$ in the complex plane with a $2\pi$-periodic parametrization of class $C^{1}$ has to be given. We need a contour that encloses the part of the real line where the smallest non-zero eigenvalue is expected. We usually use a circle with radius $R$ and center $(\mu,0)$ (in order to exclude the eigenvalue zero, we choose $\mu>R$). In this case, we have $\varphi(t)=\mu+R\cos(t)+\mathrm{i}R\sin(t)$, which satisfies $\varphi\in C^{\infty}$. The number of eigenvalues, counted with multiplicity, within the contour $\gamma$ is denoted by $n(\gamma)$. With a randomly chosen matrix $\hat{\mathbf{V}}\in\mathbb{C}^{m\times\ell}$, where $m\gg\ell\geq n(\gamma)$, the two contour integrals of the form $\displaystyle\mathbf{A}_{0}$ $\displaystyle=$ $\displaystyle\frac{1}{2\pi\mathrm{i}}\int_{\gamma}\mathbf{M}^{-1}(k)\hat{\mathbf{V}}\,\mathrm{d}s(k)\,,$ $\displaystyle\mathbf{A}_{1}$ $\displaystyle=$ $\displaystyle\frac{1}{2\pi\mathrm{i}}\int_{\gamma}k\mathbf{M}^{-1}(k)\hat{\mathbf{V}}\,\mathrm{d}s(k)$ over the given contour $\gamma$ are rewritten as $\displaystyle\mathbf{A}_{0}$ $\displaystyle=$ $\displaystyle\frac{1}{2\pi\mathrm{i}}\int_{0}^{2\pi}\mathbf{M}^{-1}(\varphi(t))\hat{\mathbf{V}}\varphi^{\prime}(t)\,\mathrm{d}s(t)\,,$ $\displaystyle\mathbf{A}_{1}$ $\displaystyle=$ $\displaystyle\frac{1}{2\pi\mathrm{i}}\int_{0}^{2\pi}\varphi(t)\mathbf{M}^{-1}(\varphi(t))\hat{\mathbf{V}}\varphi^{\prime}(t)\,\mathrm{d}s(t)$ and approximated by the trapezoidal rule, yielding $\displaystyle\mathbf{A}_{0,N}$ $\displaystyle=$ $\displaystyle\frac{1}{\mathrm{i}N}\sum_{j=0}^{N-1}\mathbf{M}^{-1}(\varphi(t_{j}))\hat{\mathbf{V}}\varphi^{\prime}(t_{j}),$
$\displaystyle\mathbf{A}_{1,N}$ $\displaystyle=$ $\displaystyle\frac{1}{\mathrm{i}N}\sum_{j=0}^{N-1}\varphi(t_{j})\mathbf{M}^{-1}(\varphi(t_{j}))\hat{\mathbf{V}}\varphi^{\prime}(t_{j})\,,$ where the parameter $N$ is given and the equidistant nodes are $t_{j}=2\pi j/N$, $j=0,\ldots,N-1$. Note that the choice $N=24$ is usually sufficient, which is due to the exponential convergence rate ([8, Theorem 4.7]). The next step is the computation of a singular value decomposition $\mathbf{A}_{0,N}=\mathbf{V}\mathbf{\Sigma}\mathbf{W}^{\mathrm{H}}$ with $\mathbf{V}\in\mathbb{C}^{m\times\ell}$, $\mathbf{\Sigma}\in\mathbb{C}^{\ell\times\ell}$, and $\mathbf{W}\in\mathbb{C}^{\ell\times\ell}$. Then, we perform a rank test for the matrix $\mathbf{\Sigma}=\mathrm{diag}(\sigma_{1},\sigma_{2},\ldots,\sigma_{\ell})$ with a given tolerance $\epsilon=\text{tol}_{\text{rank}}$ (usually $\epsilon=10^{-4}$). That is, we find $n(\gamma)$ such that $\sigma_{1}\geq\ldots\geq\sigma_{n(\gamma)}>\epsilon>\sigma_{n(\gamma)+1}\geq\ldots\geq\sigma_{\ell}$. We define $\mathbf{V}_{0}=(\mathbf{V}_{ij})_{1\leq i\leq m,1\leq j\leq n(\gamma)}$, $\mathbf{\Sigma}_{0}=(\mathbf{\Sigma}_{ij})_{1\leq i\leq n(\gamma),1\leq j\leq n(\gamma)}$, and $\mathbf{W}_{0}=(\mathbf{W}_{ij})_{1\leq i\leq\ell,1\leq j\leq n(\gamma)}$ and compute the $n(\gamma)$ eigenvalues $k_{i}$ and eigenvectors $\vec{s}_{i}$ of the matrix $\mathbf{B}=\mathbf{V}_{0}^{\mathrm{H}}\mathbf{A}_{1,N}\mathbf{W}_{0}\mathbf{\Sigma}_{0}^{-1}\in\mathbb{C}^{n(\gamma)\times n(\gamma)}$. The $i$-th non-linear eigenvector $\vec{u}_{i}$ is given by $\mathbf{V}_{0}\vec{s}_{i}$. We refer the reader to [8, p. 3849] for more details on the implementation of this algorithm and for the detailed analysis behind it, including the proof of exponential convergence.
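The steps above (randomized probing, trapezoidal contour quadrature, SVD rank test, small linear eigenproblem) can be sketched in NumPy. The toy problem $\mathbf{M}(k)=\mathbf{A}-k\mathbf{I}$ and all parameter values below are illustrative assumptions, not the boundary-element matrix of the paper:

```python
import numpy as np

def beyn(M, m, mu, R, ell=2, N=24, tol=1e-4):
    """Beyn's contour-integral algorithm for the non-linear
    eigenvalue problem M(k) u = 0 on the circle |k - mu| = R."""
    rng = np.random.default_rng(0)
    Vhat = rng.standard_normal((m, ell)) + 1j * rng.standard_normal((m, ell))
    A0 = np.zeros((m, ell), dtype=complex)
    A1 = np.zeros((m, ell), dtype=complex)
    for j in range(N):                       # trapezoidal rule, t_j = 2*pi*j/N
        t = 2.0 * np.pi * j / N
        k = mu + R * np.exp(1j * t)          # phi(t)
        dphi = 1j * R * np.exp(1j * t)       # phi'(t)
        X = np.linalg.solve(M(k), Vhat)      # M(k)^{-1} Vhat
        A0 += X * dphi
        A1 += k * X * dphi
    A0 /= 1j * N
    A1 /= 1j * N
    V, s, Wh = np.linalg.svd(A0, full_matrices=False)
    n = int(np.sum(s > tol))                 # rank test with tol_rank
    V0, W0 = V[:, :n], Wh.conj().T[:, :n]
    B = V0.conj().T @ A1 @ W0 @ np.diag(1.0 / s[:n])
    lam, S = np.linalg.eig(B)
    return lam, V0 @ S                       # eigenvalues inside the contour

# toy linear problem M(k) = A - k*I with spectrum {1, 2, 3};
# the circle (mu, R) = (2, 1/2) encloses only the eigenvalue 2
A = np.diag([1.0, 2.0, 3.0])
lam, _ = beyn(lambda k: A - k * np.eye(3), m=3, mu=2.0, R=0.5)
```

Because the integrand is analytic away from the eigenvalues, the trapezoidal rule converges exponentially in $N$, which is why a modest $N=24$ already suffices.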
### 2.5 Eigenfunction After we obtain the smallest non-zero eigenvalue $k$ and the corresponding function $u$ on the boundary from (11), we insert them into (3) to evaluate the eigenfunction at any point inside the domain. The discretization of the integrals is done as explained previously. Precisely, we have $\displaystyle u(x)=-\mathrm{DL}_{k}^{\Gamma_{1}}u_{1}(x)-\mathrm{DL}_{k}^{\Gamma_{2}}u_{2}(x)\approx\sum_{j=1}^{n_{f}}\sum_{k=1}^{3}\left(\hat{a}_{j,k}u_{1}(\widetilde{v}_{j,k})+\hat{b}_{j,k}u_{2}(\widetilde{v}_{j,k})\right)$ with $\displaystyle\hat{a}_{j,k}$ $\displaystyle=$ $\displaystyle\int_{\sigma}\partial_{\nu_{1}(\widetilde{m}_{j}(s))}\Phi_{k}(x,\widetilde{m}_{j}(s))\|\partial_{s}\widetilde{m}_{j}(s)\|\widetilde{L}_{k}(s)\,\mathrm{d}s(s)$ $\displaystyle\hat{b}_{j,k}$ $\displaystyle=$ $\displaystyle\int_{\sigma}\partial_{\nu_{2}(\widetilde{m}_{j}(s))}\Phi_{k}(x,\widetilde{m}_{j}(s))\|\partial_{s}\widetilde{m}_{j}(s)\|\widetilde{L}_{k}(s)\,\mathrm{d}s(s)$ for an arbitrary point $x\in D$. In fact, we can find maximal or minimal values by maximizing or minimizing this function. This is done with the Nelder-Mead algorithm (in Matlab, the fminsearch function); refer also to [26]. ### 2.6 Superconvergence A full convergence analysis of the method is beyond the scope of this paper. Standard convergence results are available for boundary integral equations of the second kind using the boundary element collocation method, under suitable assumptions on the boundary (for example, that the boundary is at least of class $C^{2}$) and on the boundary condition, for the Laplace equation (see [5]). Quadratic approximations of the boundary and the boundary function yield cubic convergence (refer to [5] for the Laplace equation and to [22] for the Helmholtz equation). In fact, the convergence results can be improved, as shown in [22] for the three-dimensional case. However, the exact theoretical convergence rate for the eigenvalue is not known.
It is expected to be at least of order three, but we will see in the numerical results that it is better than three (for a sufficiently smooth boundary). Future work in this direction could be done using the ideas of Steinbach & Unger [38]. Finally, note that a specific choice of $0<\alpha<1/2$ can improve the overall convergence rate for smooth boundaries (see [22]), but since we are content with the cubic convergence rate obtained for the choice $\alpha=(1-\sqrt{3/5})/2$ (a Gauss-quadrature point within the interval $[0,1]$), we have not investigated this further. ## 3 Numerical results ### 3.1 Simply-connected convex domains First, we check the correctness and the convergence of the underlying method for the unit circle. It is known that the first non-trivial interior Neumann eigenvalue is the smallest positive root of $J_{1}^{\prime}$ (the derivative of the Bessel function of the first kind of order one). The root can be computed to arbitrary precision with Maple with the command restart; Digits:=16: fsolve(diff(BesselJ(1,x),x),x=1..2); It is approximately given by $1.841\,183\,781\,340\,659\;.$ (12) For the Beyn algorithm we use the parameters $N=24$, $R=1/2$, $\mu=2$, and $\ell=10$ for various numbers of faces $n_{f}$ and collocation points $n_{c}$. With the definition of the absolute error $E_{n_{f}}^{(i)}$ of the $i$-th eigenvalue approximation, we compute the error of the first non-trivial eigenvalue $E_{n_{f}}^{(1)}$ of our method compared with (12). Additionally, we define the estimated order of convergence $\mathrm{EOC}^{(i)}=\log(E_{n_{f}}^{(i)}/E_{2\cdotp n_{f}}^{(i)})/\log(2)$ of the $i$-th eigenvalue approximation and compute $\mathrm{EOC}^{(1)}$. As we can see in Table 1, the first non-trivial Neumann eigenvalue can be computed accurately to eleven digits. Table 1: Absolute error and estimated order of convergence of the first non-trivial interior Neumann eigenvalue for a unit circle using different numbers of faces and collocation points.
$n_{f}$ | $n_{c}$ | abs. error $E_{n_{f}}^{(1)}$ | $\mathrm{EOC}^{(1)}$
---|---|---|---
5 | 15 | $5.8503_{-3}$ |
10 | 30 | $4.7818_{-4}$ | 3.6129
20 | 60 | $4.5775_{-5}$ | 3.3849
40 | 120 | $5.0168_{-6}$ | 3.1897
80 | 240 | $5.9173_{-7}$ | 3.0838
160 | 480 | $7.2096_{-8}$ | 3.0369
320 | 960 | $8.9069_{-9}$ | 3.0169
640 | 1920 | $1.1072_{-9}$ | 3.0080
1280 | 3840 | $1.3803_{-10}$ | 3.0039

The estimated order of convergence is at least three. Note that we could go beyond eleven digits of accuracy by further increasing $n_{f}$, but it is not necessary here. Next, we show the influence of the algebraic multiplicity of the eigenvalue. The first non-trivial interior Neumann eigenvalue for the unit circle has algebraic multiplicity two. The same is true for the second non-trivial interior Neumann eigenvalue, which is obtained by computing the first positive root of $J^{\prime}_{2}$, approximately given by $3.054\,236\,928\,227\,140$. The third non-trivial interior Neumann eigenvalue is simple and is obtained by computing the second root of $J_{0}^{\prime}$, given by $3.831\,705\,970\,207\,512$. Note that the first root of $J_{0}^{\prime}$ is zero, which corresponds to the interior Neumann eigenvalue zero with a constant eigenfunction. In Table 2, we show the absolute error and the estimated order of convergence for the second and third non-trivial interior Neumann eigenvalue for a unit circle, using different numbers of faces and collocation points and the parameters $N=24$, $R=1/2$, $\mu=3$, and $\ell=10$. Table 2: Absolute error and estimated order of convergence of the second and third non-trivial interior Neumann eigenvalue for a unit circle using different numbers of faces and collocation points.

$n_{f}$ | $n_{c}$ | abs. error $E_{n_{f}}^{(2)}$ | $\mathrm{EOC}^{(2)}$ | abs. error $E_{n_{f}}^{(3)}$ | $\mathrm{EOC}^{(3)}$
---|---|---|---|---|---
5 | 15 | $1.2817_{-2}$ | | $1.6335_{-2}$ |
10 | 30 | $1.1543_{-3}$ | 3.4729 | $1.3081_{-3}$ | 3.6424
20 | 60 | $1.2187_{-4}$ | 3.2437 | $1.3133_{-4}$ | 3.3162
40 | 120 | $1.4173_{-5}$ | 3.1041 | $1.5147_{-5}$ | 3.1161
80 | 240 | $1.7199_{-6}$ | 3.0428 | $1.8416_{-6}$ | 3.0401
160 | 480 | $2.1229_{-7}$ | 3.0182 | $2.2788_{-7}$ | 3.0146
320 | 960 | $2.6386_{-8}$ | 3.0082 | $2.8373_{-8}$ | 3.0057
640 | 1920 | $3.2895_{-9}$ | 3.0038 | $3.5406_{-9}$ | 3.0024
1280 | 3840 | $4.1066_{-10}$ | 3.0018 | $4.4225_{-10}$ | 3.0011

Again, we observe an estimated order of convergence of at least three, and both the order and the high accuracy are achieved regardless of the algebraic multiplicity of the eigenvalue. We define $\square_{\kappa}=[-\kappa,\kappa]\times[-\kappa,\kappa]$ with $\kappa\in\mathbb{R}_{>0}$. In general, we use a resolution of $100\times 100$ equidistantly distributed points, here within $\square_{1.1}$, and evaluate the eigenfunction at each point located inside the unit circle. In Figure 7, we show the eigenfunctions corresponding to the first three non-trivial interior Neumann eigenvalues as contour plots with 40 contour lines. We also include the location of the maximum and minimum of the eigenfunction that corresponds to the first non-trivial interior Neumann eigenvalue as a red and a blue dot, respectively. (a) First eigenfunction of a unit circle (b) Second eigenfunction of a unit circle (c) Third eigenfunction of a unit circle Figure 7: The first three eigenfunctions corresponding to the first three non-trivial interior Neumann eigenvalues $1.841\,184$, $3.054\,237$, and $3.831\,706$ for the unit circle. We can see that the extreme values for the first non-trivial interior Neumann eigenfunction of the unit circle are attained on the boundary, as is conjectured for simply-connected convex domains.
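The Bessel-root reference values quoted above can be cross-checked without Maple; a short SciPy sketch (`jvp(n, x)` is $J_n'(x)$, and the brackets are our choice around the quoted roots):

```python
from scipy.optimize import brentq
from scipy.special import jvp  # jvp(n, x) = J_n'(x)

# smallest positive roots of J_1' and J_2', and the second root of J_0'
k1 = brentq(lambda x: jvp(1, x), 1.0, 2.0)  # first eigenvalue
k2 = brentq(lambda x: jvp(2, x), 3.0, 3.5)  # second eigenvalue
k3 = brentq(lambda x: jvp(0, x), 3.5, 4.0)  # third eigenvalue
```

Each bracket contains exactly one sign change of the derivative, so `brentq` converges to the quoted sixteen-digit values within its default tolerance.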
Note that the second eigenfunction corresponding to the first non-trivial interior Neumann eigenvalue is a rotated version of the first eigenfunction. We also show the eigenfunctions, including the maximal and minimal values, for a variety of other simply-connected convex domains in Figure 8, such as an ellipse and two deformed ellipses. (a) First eigenfunction of an ellipse (b) First eigenfunction of a deformed ellipse with $\varepsilon=0.1$ (c) First eigenfunction of a deformed ellipse with $\varepsilon=0.2$ Figure 8: The eigenfunctions corresponding to the first non-trivial interior Neumann eigenvalues for the ellipse and two deformed ellipses with $\varepsilon=0.1$ and $\varepsilon=0.2$ with corresponding non-trivial interior Neumann eigenvalues $1.544\,422$, $1.849\,064$, and $1.819\,478$. The boundary of the ellipse is given in parametric form as $(6\cos(t)/5,\sin(t))^{\top}$ with $t\in[0,2\pi)$. We use the same parameters as before for Beyn's algorithm, except $\mu=1.5$, and consider $\square_{1.3}$. The first non-trivial interior Neumann eigenvalue is given by $1.544\,422$, which has algebraic multiplicity one. The parametrization of the deformed ellipse's boundary is given by $(0.75\cos(t)+\varepsilon\cos(2t),\sin(t))^{\top}$ with $t\in[0,2\pi)$, where the parameter $\varepsilon$ is chosen to be $0.1$ and $0.2$ (see [11, 24] for its first and second use). Using $n_{f}=320$, the first non-trivial interior Neumann eigenvalues of the deformed ellipses with $\varepsilon=0.1$ and $\varepsilon=0.2$ are $1.849\,064$ and $1.819\,478$, respectively. Again, they both have algebraic multiplicity one. It is generally believed that the hot spots conjecture holds for general simply-connected convex domains, but a proof is still open. In all our numerical results for simply-connected convex domains, we obtain the extrema on the boundary, as one can see in Figure 8.
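As a small sanity check on the deformed-ellipse parametrization (an observation of ours, not a claim from the paper): by Green's theorem the enclosed area equals $0.75\pi$ for every $\varepsilon$, since the deformation terms integrate to zero over a full period. A few lines confirm this numerically:

```python
import numpy as np

def area(eps, n=4000):
    """Area enclosed by (0.75 cos t + eps cos 2t, sin t) via Green's
    theorem, A = 1/2 * integral of (x y' - y x') dt, trapezoidal rule."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    x = 0.75 * np.cos(t) + eps * np.cos(2 * t)
    y = np.sin(t)
    xp = -0.75 * np.sin(t) - 2 * eps * np.sin(2 * t)
    yp = np.cos(t)
    return 0.5 * np.sum(x * yp - y * xp) * (2.0 * np.pi / n)

a1, a2 = area(0.1), area(0.2)  # both equal 0.75*pi
```

The periodic trapezoidal rule is exact here (up to rounding) because the integrand is a trigonometric polynomial of low degree.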
The same is true when we consider piecewise smooth convex domains such as the unit square and the equilateral triangle with side length one. The first non-trivial interior Neumann eigenvalues are known to be $\pi$ and $4\pi/3$ (see [15]). Their multiplicity is two. In Table 3 we see that our algorithm also handles these two piecewise smooth domains well. The estimated order of convergence is better than four and hence better than for the previously discussed smooth domains. Since we use quadratic interpolation of a smooth boundary, there is an approximation error limiting the convergence rate. For the considered piecewise smooth domains, the boundary is represented exactly, since it is piecewise linear, which explains the better convergence. Table 3: Absolute error and estimated order of convergence of the first non-trivial interior Neumann eigenvalue for a unit square and an equilateral triangle with side length one using different numbers of faces and collocation points.

$n_{f}$ | $n_{c}$ | abs. error $E_{n_{f}}^{\square}$ | $\mathrm{EOC}^{\square}$ | $n_{f}$ | $n_{c}$ | abs. error $E_{n_{f}}^{\Delta}$ | $\mathrm{EOC}^{\Delta}$
---|---|---|---|---|---|---|---
4 | 12 | $7.3586_{-3}$ | | 3 | 9 | $2.7096_{-2}$ |
8 | 24 | $4.4226_{-4}$ | 4.0565 | 6 | 18 | $2.0723_{-3}$ | 3.7088
16 | 48 | $1.9060_{-5}$ | 4.5363 | 12 | 36 | $9.7843_{-5}$ | 4.4046
32 | 96 | $7.6974_{-7}$ | 4.6301 | 24 | 72 | $4.1556_{-6}$ | 4.5573
64 | 192 | $3.0527_{-8}$ | 4.6562 | 48 | 144 | $1.7064_{-7}$ | 4.6060
128 | 384 | $1.2042_{-9}$ | 4.6639 | 96 | 288 | $6.9001_{-9}$ | 4.6282
256 | 768 | $4.7429_{-11}$ | 4.6662 | 192 | 576 | $2.7581_{-10}$ | 4.6449
512 | 1536 | $1.8665_{-12}$ | 4.6674 | 384 | 1152 | $1.0880_{-11}$ | 4.6640

Later, we will see that these favorable convergence rates depend on the regularity of the solution at a corner, and we will obtain worse approximation results.
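The EOC columns of these tables follow directly from consecutive error values via $\mathrm{EOC}=\log(E_{n_{f}}/E_{2n_{f}})/\log(2)$; for instance, with the unit-square errors transcribed from Table 3:

```python
import math

# absolute errors E for the unit square, n_f = 4, 8, ..., 512 (Table 3)
errors = [7.3586e-3, 4.4226e-4, 1.9060e-5, 7.6974e-7,
          3.0527e-8, 1.2042e-9, 4.7429e-11, 1.8665e-12]

# estimated order of convergence between successive refinements
eoc = [math.log(errors[i] / errors[i + 1], 2) for i in range(len(errors) - 1)]
```

The first entry reproduces the tabulated value 4.0565, and all entries exceed four, matching the superconvergence observed for this piecewise linear boundary.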
In Figure 9 we show one of the corresponding eigenfunctions for the unit square (refer also to Figure 1) and for the equilateral triangle with side length one, including the location of the maximum and minimum. As we can see, they are located on the boundary. (a) First eigenfunction of a unit square (b) First eigenfunction of an equilateral triangle with side length one Figure 9: One of the eigenfunctions corresponding to the first non-trivial interior Neumann eigenvalues for the unit square and the equilateral triangle with side length one with corresponding non-trivial interior Neumann eigenvalues $\pi$ and $4\pi/3$. ### 3.2 Simply-connected non-convex domains No simply-connected non-convex domain has yet been found (neither theoretically nor numerically) that fails the hot spots conjecture. Now, we concentrate on this case. We consider the deformed ellipse from the previous section with $\varepsilon=0.3$, the peanut-shaped domain, and the apple-shaped domain. The boundaries of the last two domains are given parametrically as $\sqrt{\cos^{2}(t)+\sin^{2}(t)/4}\left(\cos(t),\sin(t)\right)^{\top}$ and $\frac{0.5+0.4\cos(t)+0.1\sin(2t)}{1+0.7\cos(t)}\left(\cos(t),\sin(t)\right)^{\top}$ with $t\in[0,2\pi)$, respectively (see [41] for their use). We use the same parameters as before, with $\mu=1.5$ for the first two domains and $\mu=3$ for the apple-shaped domain. Using $n_{f}=320$, the first non-trivial interior Neumann eigenvalues for the three domains are $1.770\,906$, $1.721\,292$, and $2.761\,274$, respectively. They are all simple. As we can see in Figure 10, the maximum and minimum are attained on the boundary of the domains using $\square_{1.3}$.
(a) First eigenfunction of a deformed ellipse with $\varepsilon=0.3$ (b) First eigenfunction of a peanut (c) First eigenfunction of an apple Figure 10: The eigenfunctions corresponding to the first non-trivial interior Neumann eigenvalues for the deformed ellipse with $\varepsilon=0.3$, the peanut-shaped, and the apple-shaped domain with corresponding non-trivial interior Neumann eigenvalues $1.770\,906$, $1.721\,292$, and $2.761\,274$. Interesting domains have been constructed by Kleefeld (refer to [21] for more details) and extended by Abele and Kleefeld [1] for the purpose of finding new shape optimizers for certain non-trivial Neumann eigenvalues. The boundaries of the domains considered in those articles are given by ‘generalized’ equipotentials, which are implicit curves. The simplest equipotential is of the form $\sum_{i=1}^{m}\|x-P_{i}\|^{-1}=c$ where the points $P_{i}\in\mathbb{R}^{2}$, $i=1,\ldots,m$, the number of points $m$, and the parameter $c$ are given. All $x\in\mathbb{R}^{2}$ that satisfy the equation describe the boundary of the domain. We use this idea to construct three non-symmetric and non-convex simply-connected domains, say $D_{1}$, $D_{2}$, and $D_{3}$. For the boundary of the domain $D_{1}$, we use the parameters $m=3$ and $c=14/5$. The three points are $(-1,1/2)^{\top}$, $(1,1/3)^{\top}$, and $(0,4/5)^{\top}$. Using $\mu=1$, we obtain the first non-trivial interior Neumann eigenvalue $1.051\,055$. For the plot of the eigenfunction, we use $\square_{1.6}$. The boundary of the second domain is constructed using $m=4$ and $c=7/2$ with the points $(-5/4,1/10)^{\top}$, $(5/4,0)^{\top}$, $(1/10,-1)^{\top}$, and $(0,1)^{\top}$. We obtain the first non-trivial interior Neumann eigenvalue $1.086\,037$ when using $\mu=1$ and the eigenfunction using $\square_{1.8}$.
The third domain's boundary is constructed using $m=5$ and $c=18/5$ with the points $(-1,1/2)^{\top}$, $(1,1/2)^{\top}$, $(0,-1)^{\top}$, $(3/2,-1)^{\top}$, and $(-3/2,-6/5)^{\top}$. Using $\mu=1$, we obtain the first non-trivial interior Neumann eigenvalue $0.861\,858$. A plot of the corresponding eigenfunction is shown within $\square_{2.1}$. All three eigenfunctions, including the location of the maximal and minimal values, are shown in Figure 11. (a) First eigenfunction of $D_{1}$ (b) First eigenfunction of $D_{2}$ (c) First eigenfunction of $D_{3}$ Figure 11: The eigenfunctions corresponding to the first non-trivial interior Neumann eigenvalues for domains $D_{1}$, $D_{2}$, and $D_{3}$ with corresponding non-trivial interior Neumann eigenvalues $1.051\,055$, $1.086\,037$, and $0.861\,858$. As we can see again, the extreme values are attained on the boundary. The same is true for the L-shaped domain given by $\llcorner=[0,1]^{2}\setminus[0.5,1]^{2}$. We obtain the first non-trivial interior Neumann eigenvalue $2.429\,474$ and compare it with the well-known reference value $2\sqrt{1.475\,621\,845}\approx 2.429\,504$ (see [14]). As we can see in Table 4, the absolute error now decreases much more slowly, since the solution has reduced regularity at the corner. We are only able to achieve four digits of accuracy, with a convergence rate that appears to be $4/3$. Table 4: Absolute error and estimated order of convergence of the first non-trivial interior Neumann eigenvalue for an L-shaped domain using different numbers of faces and collocation points.

$n_{f}$ | $n_{c}$ | abs. error $E_{n_{f}}^{\llcorner}$ | $\mathrm{EOC}^{\llcorner}$
---|---|---|---
6 | 18 | $7.0712_{-3}$ |
12 | 36 | $3.1014_{-3}$ | 1.1891
24 | 72 | $1.1976_{-3}$ | 1.3728
48 | 144 | $4.7500_{-4}$ | 1.3341
96 | 288 | $1.8860_{-4}$ | 1.3326
192 | 576 | $7.4872_{-5}$ | 1.3329
384 | 1152 | $2.9724_{-5}$ | 1.3327

In Figure 12 we show the corresponding eigenfunction.
(a) First eigenfunction of $\llcorner$ (b) First eigenfunction of $\llcorner_{2}$ (c) First eigenfunction of $\llcorner_{3}$ Figure 12: The eigenfunctions corresponding to the first non-trivial interior Neumann eigenvalues for the L-shaped domain and its two variants with corresponding non-trivial interior Neumann eigenvalues $2.429\,474$, $2.725\,559$, and $2.207\,276$, respectively. We also tried two further domains $\llcorner_{2}$ and $\llcorner_{3}$, as shown in Figure 4 b) and c). Using the same parameters as for the L-shaped domain $\llcorner$, we obtain the first non-trivial Neumann eigenvalues $2.725\,559$ and $2.207\,276$, respectively. Interestingly, the approximate convergence rates are $2.8432$, $2.0744$, $1.6656$, $1.6411$, $1.7140$, and $2.0115$ for $\llcorner_{2}$ and $1.0716$, $1.1645$, $1.1745$, $1.2202$, $1.3329$, and $1.6816$ for $\llcorner_{3}$. That is why we concentrate on smooth boundaries. In sum, we were not able to construct a simply-connected non-convex domain that fails the hot spots conjecture. Next, we concentrate on non-simply-connected domains. ### 3.3 Non-simply-connected domains Now, we consider an annulus with inner radius $R_{1}=1/2$ and outer radius $R_{2}=2$. For this domain, we can again compute a reference solution to arbitrary precision. The non-trivial interior Neumann eigenvalues are obtained through the roots of $J_{n}^{\prime}(R_{1}x)Y_{n}^{\prime}(R_{2}x)-J_{n}^{\prime}(R_{2}x)Y_{n}^{\prime}(R_{1}x)=0\,,\quad n=0,1,2,\ldots$ where $Y_{n}^{\prime}$ denotes the derivative of the Bessel function of the second kind of order $n$ (see for example [40, Equation 4.16]).
The first two roots are obtained with the Maple commands restart; Digits:=16: Jp:=unapply(diff(BesselJ(1,x),x),x): Yp:=unapply(diff(BesselY(1,x),x),x): fsolve(Jp(x/2)*Yp(2*x)-Jp(2*x)*Yp(x/2),x=1); Jp:=unapply(diff(BesselJ(2,x),x),x): Yp:=unapply(diff(BesselY(2,x),x),x): fsolve(Jp(x/2)*Yp(2*x)-Jp(2*x)*Yp(x/2),x=1); The two smallest roots are approximately given by $0.822\,252\,688\,623\,884\quad\text{and}\quad 1.504\,647\,782\,189\,479\,,$ respectively. They both have multiplicity two. Again, we show in Table 5 that our method achieves close to ten digits of accuracy with cubic convergence order for the first two non-trivial interior Neumann eigenvalues of an annulus, using the parameter $\mu=6/5$ for various numbers of faces. Note that twice the number of faces is needed, since two boundary curves must be discretized. Table 5: Absolute error and estimated order of convergence of the first and second non-trivial interior Neumann eigenvalue for an annulus with $R_{1}=1/2$ and $R_{2}=2$ using different numbers of faces and collocation points.

$2n_{f}$ | $2n_{c}$ | abs. error $E_{n_{f}}^{(1)}$ | $\mathrm{EOC}^{(1)}$ | abs. error $E_{n_{f}}^{(2)}$ | $\mathrm{EOC}^{(2)}$
---|---|---|---|---|---
10 | 30 | $2.4700_{-3}$ | | $6.1268_{-3}$ |
20 | 60 | $1.9490_{-4}$ | 3.6637 | $5.4708_{-4}$ | 3.4853
40 | 120 | $1.7741_{-5}$ | 3.4575 | $5.7347_{-5}$ | 3.2540
80 | 240 | $1.8725_{-6}$ | 3.2441 | $6.6443_{-6}$ | 3.1095
160 | 480 | $2.1655_{-7}$ | 3.1122 | $8.0539_{-7}$ | 3.0443
320 | 960 | $2.6159_{-8}$ | 3.0493 | $9.9900_{-8}$ | 3.0111
640 | 1920 | $3.2351_{-9}$ | 3.0154 | $1.2988_{-8}$ | 2.9433

In Figure 13 we show the first three eigenfunctions within $\square_{2.1}$ corresponding to the non-trivial interior Neumann eigenvalues $0.822\,253$, $1.504\,648$, and $2.096\,773$. To compute the third eigenvalue, which has multiplicity two, we used $\mu=2$. For the first eigenfunction plot, we also added the extreme values.
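The smallest annulus root can again be cross-checked with SciPy instead of Maple (`jvp`/`yvp` are the derivatives of $J_n$/$Y_n$; the bracket $[0.5, 1]$ around the quoted root is our choice):

```python
from scipy.optimize import brentq
from scipy.special import jvp, yvp  # derivatives of J_n and Y_n

R1, R2 = 0.5, 2.0

def annulus_condition(x, n):
    """Left-hand side of the Neumann eigenvalue condition for the annulus."""
    return jvp(n, R1 * x) * yvp(n, R2 * x) - jvp(n, R2 * x) * yvp(n, R1 * x)

# smallest non-trivial eigenvalue (n = 1), bracketed around the quoted root
k1 = brentq(lambda x: annulus_condition(x, 1), 0.5, 1.0)
```

The condition changes sign once on this bracket, so the root finder recovers the sixteen-digit reference value directly.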
(a) First eigenfunction of an annulus (b) Second eigenfunction of an annulus (c) Third eigenfunction of an annulus Figure 13: The first three eigenfunctions corresponding to the first three non-trivial interior Neumann eigenvalues $0.822\,253$, $1.504\,648$, and $2.096\,773$ for an annulus. We can see that the extreme values are again attained on the boundary of the annulus. Now, we consider more complex non-simply-connected domains. The first domain $A_{1}$ is given by a unit circle centered at the origin with an ellipse removed that is centered at $(-0.5,-0.3)^{\top}$ with semi-axes $0.15$ and $0.35$. We obtain the first non-trivial interior Neumann eigenvalue $1.662\,873$ and the eigenfunction within $\square_{1.1}$. The second domain $A_{2}$ is given by $D_{3}$ with an ellipse removed that is centered at $(-0.1,-0.3)^{\top}$ with semi-axes $0.15$ and $0.35$. We obtain the first non-trivial interior Neumann eigenvalue $1.651\,571$ and the eigenfunction within $\square_{1.1}$. The third domain $A_{3}$ is given by $D_{3}$ with a 90 degree counter-clockwise rotated version of $D_{3}$, scaled by $1/2$, removed. We obtain the first non-trivial interior Neumann eigenvalue $1.171\,590$ and the eigenfunction within $\square_{1.1}$. We used the parameter $\mu=1.5$ for all three domains. The eigenfunctions for $A_{1}$, $A_{2}$, and $A_{3}$, including the extreme values, are illustrated in Figure 14. (a) First eigenfunction of $A_{1}$ (b) First eigenfunction of $A_{2}$ (c) First eigenfunction of $A_{3}$ Figure 14: The eigenfunctions corresponding to the first non-trivial interior Neumann eigenvalues $1.662\,873$, $1.651\,571$, and $1.171\,590$ for the domains $A_{1}$, $A_{2}$, and $A_{3}$. As we can see again, the extreme values are attained on the boundary. We also tried many other similar geometries with different kinds of holes and obtained similar results. The hot spots conjecture seems to hold. Finally, we concentrate on a more complex geometry inspired by the work of Burdzy [9, Figure 1].
He proved that there exists a bounded planar domain with one hole that fails the hot spots conjecture. However, the description of the proposed bounded domain with one hole is very technical, and it is difficult to implement his domain to verify his theoretical result. His domain's boundary has many corners, which would complicate the explicit construction of the boundary. Additionally, the domains are very thin. Note that no numerical result supports his theoretical finding; to date, no numerical results have been published. We try to close this gap. We construct less complex domains that fail the hot spots conjecture and demonstrate this numerically. Further, we show the exact location of the extreme values. The ‘teether’ domain depicted in Figure 15 is constructed as follows: Figure 15: A more advanced bounded domain with a hole inspired by the work of Burdzy [9, Figure 1]. Let $E(x,y,a,b,t_{1},t_{2})$ be the ellipse centered at $(x,y)^{\top}$ with semi-axes $a$ and $b$ constructed for $\phi\in[t_{1},t_{2})$ using the parametrization $(x+a\cos(\phi),y+b\sin(\phi))^{\top}$. Note that we also allow $t_{1}>t_{2}$ to guarantee the needed orientation of the curve. The first half of the outer boundary is given by the pieces $E(4,4,1,1,-\pi/2,-\pi)$, $E(2,4,1,1,0,\pi)$, $E(0,4,1,1/2,0,-\pi)$, $E(-2,4,1,1,0,\pi)$, $E(-4,4,1,1,0,-\pi/2)$, and $E(-4,0,3,3,\pi/2,3\pi/2)$. Rotating this half by $\pi$ yields the second half of the outer boundary. The first half of the inner boundary is given by the pieces $E(4,3/2,1,1,\pi/2,\pi)$, $E(2,3/2,1,1,0,-\pi)$, $E(0,3/2,1,1/2,0,\pi)$, $E(-2,3/2,1,1,0,-\pi)$, $E(-4,3/2,1,1,0,\pi/2)$, and $E(-4,0,5/2,5/2,\pi/2,3\pi/2)$. Rotating this half by $\pi$ yields the second half of the inner boundary. Next, the orientation of the inner boundary is reversed. Finally, all coordinates of the boundary are multiplied by $1/4$. This yields our ‘teether’ domain $C_{1}$. We use the parameters $\mu=0.8$, $\ell=20$, and $\square_{1.8}$.
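The arc-wise construction above is easy to script; a sketch (the sampling resolution is an arbitrary choice of ours) that also verifies that consecutive outer-boundary pieces join continuously:

```python
import numpy as np

def E(x, y, a, b, t1, t2, n=32):
    """Sample the ellipse arc (x + a*cos(phi), y + b*sin(phi)) for phi
    running from t1 to t2 (t1 > t2 reverses the orientation)."""
    phi = np.linspace(t1, t2, n)
    return np.column_stack((x + a * np.cos(phi), y + b * np.sin(phi)))

# first half of the outer boundary of the 'teether' domain C_1
pieces = [E(4, 4, 1, 1, -np.pi / 2, -np.pi), E(2, 4, 1, 1, 0, np.pi),
          E(0, 4, 1, 0.5, 0, -np.pi),        E(-2, 4, 1, 1, 0, np.pi),
          E(-4, 4, 1, 1, 0, -np.pi / 2),     E(-4, 0, 3, 3, np.pi / 2, 3 * np.pi / 2)]
half = np.vstack(pieces)
outer = np.vstack((half, -half)) / 4.0  # rotation by pi is a point reflection; then scale
```

The endpoint of each arc coincides with the start of the next, and the endpoint of the half coincides with the reflected start, so the assembled outer boundary is a closed curve.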
We obtain the first non-trivial interior Neumann eigenvalue $0.370\,708$. The corresponding eigenfunction, including its extreme values, is shown in Figure 16. Since the extreme values lie in very flat plateaus, we additionally show zoomed versions around the extreme values to better see that they are located inside the domain. (a) First eigenfunction of $C_{1}$ (b) Zoom around the maximum (c) Zoom around the minimum Figure 16: The eigenfunction and its zoomed versions around the maximum and minimum corresponding to the first non-trivial interior Neumann eigenvalue $0.370\,708$ for the teether domain $C_{1}$. Thus, we are able to demonstrate numerically that there exists a bounded domain with one hole that fails the hot spots conjecture. Next, we investigate some possible conditions needed to construct an example that fails the hot spots conjecture. With only one bump, it was not possible to obtain the extreme values inside the domain; call this domain $C_{2}$. We use the previous example and remove one of the bumps and its mirror version in the upper and lower part of the teether domain. We obtain the first non-trivial interior Neumann eigenvalue $0.534\,605$ and the corresponding eigenfunction within $\square_{1.6}$. (a) First eigenfunction of $C_{2}$ (b) Zoom around the maximum Figure 17: The eigenfunction and its zoomed version around the maximum corresponding to the first non-trivial interior Neumann eigenvalue $0.534\,605$ for the domain $C_{2}$. As we can see again in Figure 17, the values inside the bump area are very close to each other. The extreme values are attained on the boundary. So far, the proposed domain $C_{1}$ has two lines of symmetry. Now, we break the symmetry and show that we are still able to obtain a counter-example to the hot spots conjecture. We add a small amount of $0.0250$ to the semi-axis of the ellipse that describes the upper left bump and obtain the domain $C_{3}$. The first non-trivial interior Neumann eigenvalue is $0.367\,496$.
(a) First eigenfunction of $C_{3}$ (b) First eigenfunction of $\widetilde{C}_{3}$ Figure 18: The eigenfunctions corresponding to the first non-trivial interior Neumann eigenvalues $0.367\,496$ and $0.370\,054$ for the domains $C_{3}$ and $\widetilde{C}_{3}$. As we can see in Figure 18, the locations of the hot spots of the eigenfunction within $\square_{1.8}$ are slightly shifted, too, but they remain inside $C_{3}$. Increasing the value from $0.0250$ to $0.1255$ shifts the maximal value from the inside of the domain $\widetilde{C}_{3}$ to the boundary, while the minimum stays inside the domain. We obtain the eigenvalue $0.370\,054$. Now, we show what happens if we make the gap between $E(0,4,1,1/2,0,-\pi)$ and $E(0,3/2,1,1/2,0,\pi)$ and its mirror counterpart smaller. We use $E(0,4,1,1,0,-\pi)$ and $E(0,3/2,1,1,0,\pi)$ instead to obtain $C_{4}$. We obtain the first non-trivial interior Neumann eigenvalue $0.384\,715$ and its corresponding eigenfunction within $\square_{1.8}$, shown in Figure 19. (a) First eigenfunction of $C_{4}$ Figure 19: The eigenfunction corresponding to the first non-trivial interior Neumann eigenvalue $0.384\,715$ for the domain $C_{4}$. Next, we show what happens if we make the bumps of the domain $C_{4}$ smaller. We construct the domain as before, except that we introduce a new parameter $\delta>0$. The first half of the outer boundary is given by the pieces $E(4,3+\delta,1,\delta,-\pi/2,-\pi)$, $E(2,3+\delta,1,\delta,0,\pi)$, $E(0,3+\delta,1,\delta,0,-\pi)$, $E(-2,3+\delta,1,\delta,0,\pi)$, $E(-4,3+\delta,1,\delta,0,-\pi/2)$, and $E(-4,0,3,3,\pi/2,3\pi/2)$. Rotating this half by $\pi$ yields the second half of the outer boundary. The first half of the inner boundary is given by the pieces $E(4,5/2-\delta,1,\delta,\pi/2,\pi)$, $E(2,5/2-\delta,1,\delta,0,-\pi)$, $E(0,5/2-\delta,1,\delta,0,\pi)$, $E(-2,5/2-\delta,1,\delta,0,-\pi)$, $E(-4,5/2-\delta,1,\delta,0,\pi/2)$, and $E(-4,0,5/2,5/2,\pi/2,3\pi/2)$.
Rotating this half by $\pi$ yields the second half of the inner boundary. Next, the orientation of the inner boundary is reversed. Finally, all coordinates of the boundary are multiplied by $1/4$. Note that $\delta=1$ yields the domain $C_{4}$. We obtain the results shown in Figure 20 within $\square_{1.8}$ for $\delta=1/4$, $\delta=1/10$, and $\delta=1/20$. (a) First eigenfunction of $\tilde{C}_{4}$ (b) First eigenfunction of $\hat{C}_{4}$ (c) First eigenfunction of $\bar{C}_{4}$ Figure 20: The eigenfunctions corresponding to the first non-trivial interior Neumann eigenvalues $0.578\,402$, $0.668\,373$, and $0.708\,633$ for the domains $\tilde{C}_{4}$, $\hat{C}_{4}$ and $\bar{C}_{4}$, respectively. Surprisingly, the extreme values stay inside the domains $\tilde{C}_{4}$, $\hat{C}_{4}$, and $\bar{C}_{4}$. Also note that the first non-trivial interior Neumann eigenvalue changes drastically. Interestingly, we can also remove the bumps in the lower part of the teether domain $C_{1}$ and still obtain the extreme values inside the new domain $C_{5}$. The result is shown in Figure 21 within $\square_{1.8}$. The corresponding non-trivial interior Neumann eigenvalue is $0.563\,329$. (a) First eigenfunction of $C_{5}$ Figure 21: The eigenfunction corresponding to the first non-trivial interior Neumann eigenvalue $0.563\,329$ for the domain $C_{5}$. Removing the bumps on the upper part as well yields the following results for the new ‘stadium’ domain $S$ as shown in Figure 22 within $\square_{1.8}$. (a) First eigenfunction of $S$ (b) Second eigenfunction of $S$ Figure 22: The eigenfunctions corresponding to the first and second non-trivial interior Neumann eigenvalues $0.755\,416$ and $0.756\,082$ for the domain $S$. Unfortunately, the extreme values are now on the boundary of $S$. The first non-trivial interior Neumann eigenvalue is $0.755\,416$. The second non-trivial eigenvalue is very close, but distinct: $0.756\,082$. 
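The elliptical boundary pieces of the $\delta$-family above can be assembled and checked numerically. The short Python sketch below is our own illustration (not the paper's Matlab code); it assumes the convention, consistent with the pieces listed above, that $E(m_1,m_2,a,b,t_0,t_1)$ denotes the arc $t\mapsto(m_1+a\cos t,\,m_2+b\sin t)$ for $t$ running from $t_0$ to $t_1$.

```python
import math

def arc(m1, m2, a, b, t0, t1, n=64):
    # Sample the ellipse arc E(m1, m2, a, b, t0, t1), assumed to be
    # t -> (m1 + a*cos(t), m2 + b*sin(t)) for t running from t0 to t1.
    ts = [t0 + (t1 - t0) * k / n for k in range(n + 1)]
    return [(m1 + a * math.cos(t), m2 + b * math.sin(t)) for t in ts]

def outer_half(delta):
    # First half of the outer boundary of the delta-family (delta = 1 gives C_4).
    return [arc(4, 3 + delta, 1, delta, -math.pi / 2, -math.pi),
            arc(2, 3 + delta, 1, delta, 0, math.pi),
            arc(0, 3 + delta, 1, delta, 0, -math.pi),
            arc(-2, 3 + delta, 1, delta, 0, math.pi),
            arc(-4, 3 + delta, 1, delta, 0, -math.pi / 2),
            arc(-4, 0, 3, 3, math.pi / 2, 3 * math.pi / 2)]

def is_continuous(pieces, tol=1e-12):
    # Consecutive arcs must share an endpoint for the boundary to be a curve.
    return all(math.dist(p[-1], q[0]) < tol for p, q in zip(pieces, pieces[1:]))
```

Under this convention the six pieces join continuously for every $\delta$, and rotating the half by $\pi$ closes the outer boundary, which supports the assumed reading of $E$.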
We can see that the limiting process of making the bumps of $C_{4}$ smaller (see also $\tilde{C}_{4}$ and $\hat{C}_{4}$) yields the eigenfunction corresponding to the second non-trivial interior Neumann eigenvalue for the domain $S$. This is very unexpected. Further, it seems that the heat flow out of the narrow part of the ‘pipes’ has to be almost without inclination. We use the parametrization $r_{i}(t)\cdotp(\sin(t),\cos(t))^{\top}$ with $r_{i}(t)=a_{i}(1+1/2\sin(\omega t\cdotp\mathbb{I}_{[(\omega/2-1)\pi/\omega,(\omega/2+1)\pi/\omega]\cup[(3\omega/2-1)\pi/\omega,(3\omega/2+1)\pi/\omega]}(t)))$, $t\in[0,2\pi)$, with $a_{1}=1$ for the outer boundary and $a_{2}=0.9$ for the inner boundary. Here $\mathbb{I}$ denotes the indicator function. We use $\omega=4$ and $\omega=8$ to construct the domains $F_{1}$ and $F_{2}$. We obtain the first non-trivial interior Neumann eigenvalues $1.592\,787$ and $1.717\,098$ and the eigenfunctions within $\square_{1.6}$. As we can see in Figure 23, the maximum and minimum values are attained on the boundary. In fact, we have two maxima and two minima. (a) First eigenfunction of $F_{1}$ (b) First eigenfunction of $F_{2}$ Figure 23: The eigenfunctions corresponding to the first non-trivial interior Neumann eigenvalues $1.592\,787$ and $1.717\,098$ for the domains $F_{1}$ and $F_{2}$, respectively. Next, we show in Table 6 the following information for the different examples that fail the hot spots conjecture: the domain under consideration, the location of the global maximum and minimum inside the domain, and the ratios $\aleph_{\textit{max}}$ and $\aleph_{\textit{min}}$, defined as the maximum inside the domain divided by the maximum on the boundary, and likewise for the minimum. All ratios will be larger than one, but we are interested in how close they are to one. Recall that the hot spots conjecture fails if we have $\aleph_{\textit{max}}>1$ and/or $\aleph_{\textit{min}}>1$. 
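The radial parametrization above can be sketched in a few lines of Python (our own illustration, not code from the paper's repository); inside the two indicator intervals the radius oscillates and produces the bumps, outside them $r_i(t)=a_i$.

```python
import math

def r(t, a=1.0, omega=4):
    # r_i(t) = a_i * (1 + 1/2 * sin(omega * t * I(t))), where I is the
    # indicator of the two "bump" intervals; outside them r(t) = a_i.
    lo1, hi1 = (omega / 2 - 1) * math.pi / omega, (omega / 2 + 1) * math.pi / omega
    lo2, hi2 = (3 * omega / 2 - 1) * math.pi / omega, (3 * omega / 2 + 1) * math.pi / omega
    ind = 1.0 if (lo1 <= t <= hi1 or lo2 <= t <= hi2) else 0.0
    return a * (1.0 + 0.5 * math.sin(omega * t * ind))

def boundary_point(t, a=1.0, omega=4):
    # The boundary curve r_i(t) * (sin t, cos t)^T.
    return (r(t, a, omega) * math.sin(t), r(t, a, omega) * math.cos(t))
```

For $\omega=4$ and $a_1=1$ the radius dips to $1/2$ at $t=3\pi/8$ and rises to $3/2$ at $t=5\pi/8$, one inward and one outward bump per indicator interval.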
The maximum and minimum are calculated with the Matlab minimization routine fminsearch by passing the function given in (2), with a tolerance of $10^{-10}$ for the step size and for the difference of function evaluations. For the first two domains, we use the starting values $(0,0.6)^{\top}$ and $(0,-0.6)^{\top}$, respectively. For the next three domains, we use $(0,0.69)^{\top}$ and $(0,-0.69)^{\top}$, and for the last domain, we use $(0,0.6875)^{\top}$ and $(0,-0.6875)^{\top}$. Table 6: Location of the maximum and minimum inside the domain along with the ratios $\aleph_{\textit{max}}$ and $\aleph_{\textit{min}}$ for various domains $D$ that fail the hot spots conjecture. $D$ | location max | location min | $\aleph_{\textit{max}}$ | $\aleph_{\textit{min}}$ ---|---|---|---|--- $C_{1}$ | $(-3.805_{-8},6.877_{-1})^{\top}$ | $(3.299_{-8},-6.877_{-1})^{\top}$ | $1+1.221_{-4}$ | $1+1.221_{-4}$ $C_{4}$ | $(8.126_{-8},6.875_{-1})^{\top}$ | $(-3.893_{-8},-6.875_{-1})^{\top}$ | $1+1.196_{-5}$ | $1+1.196_{-5}$ $\tilde{C}_{4}$ | $(-2.998_{-8},6.875_{-1})^{\top}$ | $(-2.028_{-8},-6.875_{-1})^{\top}$ | $1+7.130_{-6}$ | $1+7.118_{-6}$ $\hat{C}_{4}$ | $(3.530_{-8},6.875_{-1})^{\top}$ | $(2.824_{-8},-6.875_{-1})^{\top}$ | $1+3.808_{-6}$ | $1+3.802_{-6}$ $\bar{C}_{4}$ | $(-4.580_{-8},6.875_{-1})^{\top}$ | $(-4.787_{-8},-6.875_{-1})^{\top}$ | $1+2.137_{-6}$ | $1+2.138_{-6}$ $C_{5}$ | $(-1.968_{-7},6.877_{-1})^{\top}$ | ————— | $1+1.438_{-3}$ | ————— As we can see, the maximum and minimum are approximately located on the $y$-axis, each centered between the bumps, for the first four domains. Interestingly, the ratios are of the form $1+\epsilon$ with very small $\epsilon>0$. The parameter $\epsilon$ decreases as the shape gets closer to the ‘stadium’ domain. Hence, we might conjecture that there exists a domain with one hole where we have $\aleph_{\textit{max}}=1+\epsilon$ and $\aleph_{\textit{min}}=1+\epsilon$ with $\epsilon\geq\epsilon_{0}>0$ for a given, arbitrarily small $\epsilon_{0}$. 
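The ratio $\aleph_{\textit{max}}$ can be illustrated as follows: evaluate a candidate function on interior and boundary samples and divide the two maxima. The sketch below is our own toy example, with a hand-picked quadratic standing in for the eigenfunction from (2); a genuine computation would instead locate the interior maximum with a derivative-free search such as fminsearch.

```python
import math

def aleph_max(u, interior_pts, boundary_pts):
    # Ratio of the maximum over interior samples to the maximum over
    # boundary samples; a value > 1 exhibits a failure of the conjecture.
    return max(map(u, interior_pts)) / max(map(u, boundary_pts))

# Toy stand-in for an eigenfunction on the square [-1, 1]^2 whose peak
# (0, 0.6) lies strictly inside the domain (assumed for illustration only).
def u(p):
    return 2.0 - p[0] ** 2 - (p[1] - 0.6) ** 2

n = 200
grid = [-1 + 2 * k / n for k in range(n + 1)]
interior = [(x, y) for x in grid[1:-1] for y in grid[1:-1]]
boundary = ([(x, s) for x in grid for s in (-1.0, 1.0)] +
            [(s, y) for y in grid for s in (-1.0, 1.0)])
```

For this toy function the interior maximum is $2$ and the boundary maximum is $1.84$ (at $(0,1)$), so the ratio is $2/1.84\approx 1.087>1$.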
For the last domain, we obtain the largest value of $\epsilon$. Precisely, we obtain $\epsilon>10^{-3}$. However, the location of the minimum might be on the boundary (or on the complete part of the $y$-axis). Finally, we also show a different, easy-to-construct domain with one hole that fails the hot spots conjecture. The outer boundary is given by straight lines connecting the $21$ points $(0,1)^{\top}$, $(3,1)^{\top}$, $(3,0)^{\top}$, $(4,0)^{\top}$, $(4,1)^{\top}$, $(6,1)^{\top}$, $(6,0)^{\top}$, $(7,0)^{\top}$, $(7,1)^{\top}$, $(10,1)^{\top}$, $(10,6)^{\top}$, $(7,6)^{\top}$, $(7,7)^{\top}$, $(6,7)^{\top}$, $(6,6)^{\top}$, $(4,6)^{\top}$, $(4,7)^{\top}$, $(3,7)^{\top}$, $(3,6)^{\top}$, $(0,6)^{\top}$, and $(0,1)^{\top}$. The inner boundary is given by the straight lines connecting the $21$ points $(1,2)^{\top}$, $(3,2)^{\top}$, $(3,3)^{\top}$, $(4,3)^{\top}$, $(4,2)^{\top}$, $(6,2)^{\top}$, $(6,3)^{\top}$, $(7,3)^{\top}$, $(7,2)^{\top}$, $(9,2)^{\top}$, $(9,5)^{\top}$, $(7,5)^{\top}$, $(7,4)^{\top}$, $(6,4)^{\top}$, $(6,5)^{\top}$, $(4,5)^{\top}$, $(4,4)^{\top}$, $(3,4)^{\top}$, $(3,5)^{\top}$, $(1,5)^{\top}$, and $(1,2)^{\top}$, and then stored in reversed order. The resulting coordinates are first shifted by $-(5,7/2)^{\top}$ and then scaled by $1/2$ to center the ‘brick’ domain $B$ with respect to the origin. Using $600$ collocation points with the parameters $N=24$, $R=1/2$, $\mu=7/10$, and $\ell=20$ yields the eigenfunction shown in Figure 24 within $\square_{3}$ using a resolution of $100\times 100$. (a) First eigenfunction of $B$ Figure 24: The eigenfunction corresponding to the first non-trivial interior Neumann eigenvalue $0.411\,448$ for the brick domain $B$. As we can see, the maximal and minimal values are attained within the domain. Hence, we have constructed another domain with one hole that fails the hot spots conjecture. 
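The coordinate processing for the brick domain (shift by $-(5,7/2)^{\top}$, then scale by $1/2$) can be verified mechanically; the following lines are our own sanity-check sketch, not the paper's Matlab code.

```python
OUTER = [(0, 1), (3, 1), (3, 0), (4, 0), (4, 1), (6, 1), (6, 0), (7, 0),
         (7, 1), (10, 1), (10, 6), (7, 6), (7, 7), (6, 7), (6, 6), (4, 6),
         (4, 7), (3, 7), (3, 6), (0, 6), (0, 1)]

INNER = [(1, 2), (3, 2), (3, 3), (4, 3), (4, 2), (6, 2), (6, 3), (7, 3),
         (7, 2), (9, 2), (9, 5), (7, 5), (7, 4), (6, 4), (6, 5), (4, 5),
         (4, 4), (3, 4), (3, 5), (1, 5), (1, 2)]

def center_and_scale(pts):
    # Shift by -(5, 7/2), then scale by 1/2, centering the brick at the origin.
    return [((x - 5) * 0.5, (y - 3.5) * 0.5) for x, y in pts]

# The inner boundary is stored in reversed order so that its orientation
# is opposite to that of the outer boundary.
inner_reversed = list(reversed(center_and_scale(INNER)))
```

After the shift and scaling, both polygons are closed and the outer one occupies $[-2.5,2.5]\times[-1.75,1.75]$, symmetric about the origin as claimed.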
### 3.4 Domains with more than one hole Finally, we show without further discussion that we are also able to construct examples with more than one hole for which the hot spots conjecture fails to hold. Using the teether domain $C_{1}$ and removing a circle centered at $(1/2,11/16)^{\top}$ with radius $\mathfrak{R}$ yields a domain with two holes, say $C_{1,\mathfrak{R}}$. Using $\mathfrak{R}=0.1$, $\mathfrak{R}=0.15$, and $\mathfrak{R}=0.2$ gives the results shown in Figure 25, where we used the same set of parameters as for the results for $C_{1}$. (a) First eigenfunction of $C_{1,0.1}$ (b) First eigenfunction of $C_{1,0.15}$ (c) First eigenfunction of $C_{1,0.2}$ Figure 25: The eigenfunctions corresponding to the first non-trivial interior Neumann eigenvalues $0.372\,580$, $0.374\,917$, and $0.378\,131$ for the domains $C_{1,0.1}$, $C_{1,0.15}$ and $C_{1,0.2}$, respectively. As we can see, the maximum and minimum still remain inside the domains $C_{1,0.1}$ and $C_{1,0.15}$. The location of the maximum is slightly shifted to the right whereas the minimum is slightly shifted to the left for the first two cases. If the radius of the removed circle is large, then the maximum moves to the boundary whereas the minimum stays inside the domain $C_{1,0.2}$, as shown in the last contour plot of Figure 25. Next, we construct domains with three holes. To this end, we use the previous domain $C_{1,\mathfrak{R}}$ and mirror the circular hole at the $y$-axis. This yields the domain $\widetilde{C}_{1,\mathfrak{R}}$. Using the same parameters as before yields the results shown in Figure 26. (a) First eigenfunction of $\widetilde{C}_{1,0.1}$ (b) First eigenfunction of $\widetilde{C}_{1,0.15}$ (c) First eigenfunction of $\widetilde{C}_{1,0.2}$ Figure 26: The eigenfunctions corresponding to the first non-trivial interior Neumann eigenvalues $0.374\,552$, $0.379\,670$, and $0.387\,461$ for the domains $\widetilde{C}_{1,0.1}$, $\widetilde{C}_{1,0.15}$ and $\widetilde{C}_{1,0.2}$, respectively. 
As we can see now, the maximum and minimum remain inside all three domains $\widetilde{C}_{1,0.1}$, $\widetilde{C}_{1,0.15}$, and $\widetilde{C}_{1,0.2}$. Next, we consider domains with four holes. To this end, we use the domains $C_{1,0.1}$, $C_{1,0.15}$ and $C_{1,0.2}$ and mirror the upper right circular hole at the $x$-axis. This yields the domains $C_{1,0.1}^{\mathrm{mir}}$, $C_{1,0.15}^{\mathrm{mir}}$ and $C_{1,0.2}^{\mathrm{mir}}$, respectively. The results are presented in Figure 27. (a) First eigenfunction of $C_{1,0.1}^{\mathrm{mir}}$ (b) First eigenfunction of $C_{1,0.15}^{\mathrm{mir}}$ (c) First eigenfunction of $C_{1,0.2}^{\mathrm{mir}}$ Figure 27: The eigenfunctions corresponding to the first non-trivial interior Neumann eigenvalues $0.376\,406$, $0.383\,781$, and $0.394\,522$ for the domains $C_{1,0.1}^{\mathrm{mir}}$, $C_{1,0.15}^{\mathrm{mir}}$ and $C_{1,0.2}^{\mathrm{mir}}$, respectively. As expected, we obtain the minimal and maximal value inside the domain for the two domains $C_{1,0.1}^{\mathrm{mir}}$ and $C_{1,0.15}^{\mathrm{mir}}$, whereas the maximum is on the boundary for the domain $C_{1,0.2}^{\mathrm{mir}}$. The domains with five holes $\widetilde{C}_{1,0.1}^{\mathrm{mir}}$, $\widetilde{C}_{1,0.15}^{\mathrm{mir}}$ and $\widetilde{C}_{1,0.2}^{\mathrm{mir}}$ are constructed by mirroring the domains $\widetilde{C}_{1,0.1}$, $\widetilde{C}_{1,0.15}$ and $\widetilde{C}_{1,0.2}$ at the $x$-axis. The results are shown in Figure 28. 
(a) First eigenfunction of $\widetilde{C}_{1,0.1}^{\mathrm{mir}}$ (b) First eigenfunction of $\widetilde{C}_{1,0.15}^{\mathrm{mir}}$ (c) First eigenfunction of $\widetilde{C}_{1,0.2}^{\mathrm{mir}}$ Figure 28: The eigenfunctions corresponding to the first non-trivial interior Neumann eigenvalues $0.378\,360$, $0.388\,438$, and $0.403\,504$ for the domains $\widetilde{C}_{1,0.1}^{\mathrm{mir}}$, $\widetilde{C}_{1,0.15}^{\mathrm{mir}}$ and $\widetilde{C}_{1,0.2}^{\mathrm{mir}}$, respectively. As we can see, the maximal and minimal values are attained inside all the considered domains with five holes. The extension of the construction to domains with more than five holes which do not satisfy the hot spots conjecture is now straightforward. ## 4 Summary and outlook In this paper, a detailed description is given of how to compute the first non-trivial eigenvalue and its corresponding eigenfunction for the Laplace equation with Neumann boundary condition for a given domain with one hole. The problem is reformulated as a non-linear eigenvalue problem involving boundary integral equations, thus reducing a two-dimensional problem to a one-dimensional problem. Due to superconvergence, we are able to achieve highly accurate approximations both for the eigenvalue and the eigenfunction. With this method at hand, we can compute the eigenvalue and eigenfunction for several different constructed domains. This makes it possible to find domains with one hole failing the hot spots conjecture and to investigate the influence of varying the domain. Some interesting observations can be made, such as that the ratio between the maximal/minimal value inside the domain and the maximal/minimal value on the boundary can be as large as $1+10^{-3}$. The Matlab codes, including the produced data, are available on GitHub at https://github.com/kleefeld80/hotspots, so researchers can run them on their own constructed domains and reproduce the numerical results within this article. 
This might give new ideas on whether one can find assumptions in order to prove or disprove the hot spots conjecture. The extension to domains with more than one hole is straightforward. For the sake of completeness, such examples are given at the end of the numerical results section for domains with up to five holes, but without detailed discussion. It would be interesting to check whether it is possible to construct three-dimensional domains with one hole that fail the hot spots conjecture, too. The software for a domain without a hole would already be available and only needs to be extended (see [22, 20]). The consideration of other partial differential equations in two or three dimensions whose fundamental solution is known, together with Neumann boundary conditions, could be numerically investigated as well. The author thanks Prof. Stefan Steinerberger from the University of Washington, Seattle (USA) for the fruitful discussions during the preparation of the manuscript. ## References * [1] D. Abele and A. Kleefeld. New numerical results for the optimization of Neumann eigenvalues. In C. Constanda, editor, Computational and Analytic Methods in Science and Engineering, pages 1–20. Birkhäuser, 2020. * [2] E. O. Asante-Asamani, A. Kleefeld, and B. A. Wade. A second-order exponential time differencing scheme for non-linear reaction-diffusion systems with dimensional splitting. J. Comput. Phys., 415:109490, 2020. * [3] R. Atar. Invariant wedges for a two-point reflecting Brownian motion and the “hot spots” problem. Electronic Journal of Probability, 6(18):1–19, 2001. * [4] R. Atar and K. Burdzy. On Neumann eigenfunctions in lip domains. Journal of the American Mathematical Society, 17(2):243–265, 2004. * [5] K. E. Atkinson. The Numerical Solution of Integral Equations of the Second Kind. Cambridge University Press, 1997. * [6] R. Bañuelos and K. Burdzy. On the “hot spots” conjecture of J. Rauch. Journal of Functional Analysis, 164:1–33, 1999. * [7] R. F. 
Bass and K. Burdzy. Fiber Brownian motion and the “hot spots” problem. Duke Mathematical Journal, 105(1):25–58, 2000. * [8] W.-J. Beyn. An integral method for solving nonlinear eigenvalue problems. Linear Algebra and its Applications, 436:3839–3863, 2012. * [9] K. Burdzy. The hot spots problem in planar domains with one hole. Duke Mathematical Journal, 129(3):481–502, 2005. * [10] K. Burdzy and W. Werner. A counterexample to the “hot spots” conjecture. Annals of Mathematics, 149(1):309–317, 1999. * [11] F. Cakoni and R. Kress. A boundary integral equation method for the transmission eigenvalue problem. Applicable Analysis, 96(1):23–38, 2017. * [12] D. Colton and R. Kress. Inverse acoustic and electromagnetic scattering theory. Springer, 3rd edition, 2013. * [13] P. Freitas. Closed nodal lines and interior hot spots of the second eigenfunction of the Laplacian on surfaces. Indiana University Mathematics Journal, 51(2):305–316, 2002. * [14] A. Gilette, C. Gross, and K. Plackowski. Numerical studies of serendipity and tensor product elements for eigenvalue problems. Involve: A Journal of Mathematics, 11(4):661–678, 2018. * [15] D. S. Grebenkov and B.-T. Nguyen. Geometrical structure of Laplacian eigenfunctions. SIAM Review, 55(4):601–667, 2013. * [16] R. Hempel, L. A. Seco, and B. Simon. The essential spectrum of Neumann Laplacians on some bounded singular domains. Journal of Functional Analysis, 102(2):448–483, 1991. * [17] D. Jerison and N. Nadirashvili. The “hot spots” conjecture for domains with two axes of symmetry. Journal of the American Mathematical Society, 13(4):741–772, 2000. * [18] C. Judge and S. Mondal. Euclidean triangles have no hot spots. Annals of Mathematics, 191(1):167–211, 2020. * [19] B. Kawohl. Rearrangements and Convexity of Level Sets in PDE. Lecture Notes in Mathematics. Springer, 1985. * [20] A. Kleefeld. 
Numerical methods for acoustic and electromagnetic scattering: Transmission boundary-value problems, interior transmission eigenvalues, and the factorization method. Habilitation thesis, Brandenburg University of Technology Cottbus - Senftenberg, Cottbus, 2015. * [21] A. Kleefeld. Shape optimization for interior Neumann and transmission eigenvalues. In C. Constanda and P. Harris, editors, Integral Methods in Science and Engineering, pages 185–196. Springer, 2019. * [22] A. Kleefeld and T.-C. Lin. Boundary element collocation method for solving the exterior Neumann problem for Helmholtz’s equation in three dimensions. Electronic Transactions on Numerical Analysis, 39:113–143, 2012. * [23] A. Kleefeld and T.-C. Lin. A global Galerkin method for solving the exterior Neumann problem for the Helmholtz equation using Panich’s integral equation approach. SIAM Journal on Scientific Computing, 35(3):A1709–A1735, 2013. * [24] A. Kleefeld and L. Pieronek. The method of fundamental solutions for computing acoustic interior transmission eigenvalues. Inverse Problems, 34(3):035007, 2018. * [25] D. Krejčiřík and M. Tušek. Location of hot spots in thin curved strips. Journal of Differential Equations, 266(6):2953–2977, 2019. * [26] J. C. Lagarias, J. A. Reeds, M. H. Wright, and P. E. Wright. Convergence properties of the Nelder–Mead simplex method in low dimensions. SIAM Journal on Optimization, 9(1):112–147, 1998. * [27] R. R. Lederman and S. Steinerberger. Extreme values of the Fiedler vector on trees. arXiv 1912.08327, 2019. * [28] W. McLean. Strongly Elliptic Systems and Boundary Integral Operators. Cambridge University Press, 2000. * [29] Y. Miyamoto. The “hot spots” conjecture for a certain class of planar convex domains. Journal of Mathematical Physics, 50(10):103530, 2009. * [30] Y. Miyamoto. A planar convex domain with many isolated “hot spots” on the boundary. Japan Journal of Industrial and Applied Mathematics, 30:145–164, 2013. * [31] M. N. Pascu. 
Scaling coupling of reflecting Brownian motions and the hot spots problem. Transactions of the American Mathematical Society, 354(11):4681–4702, 2002. * [32] J. Rauch. Lecture #1. Five problems: An introduction to the qualitative theory of partial differential equations. In J. Goldstein, editor, Partial differential equations and related topics, volume 446 of Lecture Notes in Mathematics, pages 355–369. Springer, 1974. * [33] M. Reed and B. Simon. Methods of Modern Mathematical Physics, Vol. 4: Analysis of Operators. Academic Press, 1978. * [34] S. Sauter and C. Schwab. Boundary Element Methods, volume 39 of Computational Mathematics. Springer, 2011. * [35] A. F. Seybert, B. Soenarko, F. J. Rizzo, and D. J. Shippy. An advanced computational method for radiation and scattering of acoustic waves in three dimensions. Journal of the Acoustical Society of America, 77(2):362–368, 1985. * [36] L. F. Shampine. Vectorized adaptive quadrature in MATLAB. Journal of Computational and Applied Mathematics, 211(2):131–140, 2008. * [37] B. Siudeja. Hot spots conjecture for a class of acute triangles. Mathematische Zeitschrift, 208:783–806, 2015. * [38] O. Steinbach and G. Unger. Convergence analysis of a Galerkin boundary element method for the Dirichlet Laplacian eigenvalue problem. SIAM Journal on Numerical Analysis, 50(2):710–728, 2012. * [39] S. Steinerberger. Hot spots in convex domains are in the tips (up to an inradius). Communications in Partial Differential Equations, 45(6):641–654, 2020. * [40] C. C. Tsai, D. L. Young, C. W. Chen, and C. M. Fan. The method of fundamental solutions for eigenproblems in domains with and without interior holes. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 462(2069):1443–1466, 2006. * [41] J. Yang, B. Zhang, and H. Zhang. The factorization method for reconstructing a penetrable obstacle with unknown buried objects. SIAM Journal on Applied Mathematics, 73(2):617–635, 2013.
# Surgery on $\mathbf{Aut}(F_{2})$ Sylvain Barré and Mikaël Pichot Sylvain Barré, UMR 6205, LMBA, Université de Bretagne-Sud, BP 573, 56017, Vannes, France [email protected] Mikaël Pichot, McGill University, 805 Sherbrooke St W., Montréal, QC H3A 0B9, Canada [email protected] ###### Abstract. We study a geometric construction of certain finite index subgroups of $\operatorname{\mathrm{Aut}}(F_{2})$. We recall that $\operatorname{\mathrm{Aut}}(F_{2})$ admits an isometric properly discontinuous action with compact quotient on a CAT(0) complex $X_{0}$, called the Brady complex, which was introduced in [4]. In §1, we show that $\operatorname{\mathrm{Aut}}(F_{2})$ can be presented (virtually) in a very simple manner from a labelling of a flat torus. Starting from a torus of size $6\times n$, for some fixed integer $n\geq 1$ (we shall discuss the case $n=5$ in detail), we associate to it, via a “pinching and (systolic) filling” construction, a 2-complex $B_{n}$ whose fundamental group $G_{n}$ is of finite index in $\operatorname{\mathrm{Aut}}(F_{2})$. The universal cover $X_{n}$ of $B_{n}$ is a CAT(0) space. We show in §1 that the $X_{n}$’s are pairwise isometric for every $n\geq 1$, and that $X_{0}$ and $X_{n}$ are locally isometric, in the sense that their vertex links are pairwise isometric (Lemma 1.3) for every $n\geq 0$. This implies, by the result below, that $X_{n}$ is isometric to $X_{0}$ for every $n$. In §2, we prove a geometric rigidity theorem for the Brady complex. Roughly speaking, the result states that $X_{0}$ is the “free complex” on one (any) of its faces, among the complexes locally isomorphic to $X_{0}$ (see Th. 2.3 for a precise statement). This seems to be a rather special property of $X_{0}$, which is not very often satisfied among the 2-complexes we have studied. Theorem 2.3 implies that every CAT(0) 2-complex locally isometric to $X_{0}$ is isometric to $X_{0}$. 
The notion of local isomorphism in this statement is slightly more restrictive than requiring the existence of an abstract isometry between the links shown in Lemma 1.3: the two complexes must be of the same (local) type (see §2). The additional conditions are however immediate to verify for the $X_{n}$’s for $n\geq 1$. In §3, we show that every torsion free finite index orientable subgroup of $\operatorname{\mathrm{Aut}}(F_{2})$ can be constructed abstractly by a pinching–and–filling construction, similar to the one given in §1, applied to finitely many tori. It is not clear however how to extend the explicit procedure given in §1 to describe, e.g., the family of torsion free finite index subgroups which are associated with a fixed number of tori. In §4, we explain the origin of the toric presentation given in §1. The present paper can be seen as a continuation of an earlier work [3], in which we introduced a cobordism category $\mathrm{Bord}_{A}$ which can be used to construct groups acting on complexes of a given (local) type $A$. We show below that the techniques of [3] can be applied to the case of $\operatorname{\mathrm{Aut}}(F_{2})$. This gives rise to groups acting on complexes of type $\operatorname{\mathrm{Aut}}(F_{2})$ as defined in §2. In the case of $\operatorname{\mathrm{Aut}}(F_{2})$, however, the spaces constructed by surgery in this way must, by the results in §2, be quotients of the Brady complex $X_{0}$, and the resulting fundamental groups, subgroups of $\operatorname{\mathrm{Aut}}(F_{2})$. This is not true of many cobordism categories, and contrasts for example with the categories studied in [3], in which the groups accessible by surgery in a given category (of a fixed local type, e.g., Moebius–Kantor) are typically not pairwise commensurable. Again, the category $\mathrm{Bord}_{A}$ is rather special in this respect when $A$ is the type $\operatorname{\mathrm{Aut}}(F_{2})$. 
Finally, we give in §5 an example of a CAT(0) 2-complex $X^{\prime}$ which is locally isomorphic to the Brady complex $X_{0}$, but not isometrically isomorphic to it. Here “locally isomorphic” refers to the fact that the links in $X^{\prime}$ are isometric to the links in the complex $X_{0}$. Acknowledgement. The second author is supported by an NSERC discovery grant. ## 1\. The toric presentation Consider the flat torus $T_{5}$ of size $6\times 5$ defined as follows: Every edge in $T_{5}$ is oriented and labelled. The boundary is identified in the standard way, respecting both the orientation and the labelling of the boundary edges. Note that there is a non-trivial Dehn twist, which we will denote $\tau_{-6}$, in the vertical direction. We endow the torus $T_{5}$ with the standard Euclidean metric, in which the cells are (as shown in the figure) lozenges with sides of length 1. Here is the basic construction. The figure contains a total of 20 letters. They are denoted $A_{r}$, $B_{r}$, $C_{r}$, $D_{r}$, $0\leq r\leq 4$. Let $L$ be a letter. For every triple $K$ of the form $K:=(L,L^{\prime},L^{\prime\prime})$, consider an oriented triangle with edges labelled by $K$ in the given order. We attach this triangle to the torus $T_{5}$ along its boundary, respecting the orientation and labelling of the boundary edges. This operation, repeated for the twenty triples $K$, defines a 2-complex $B_{5}$. Let $X_{5}:=\widetilde{B_{5}}$ denote the universal cover of $B_{5}$, and let $G_{5}:=\pi_{1}(B_{5})$ denote the fundamental group of $B_{5}$. Note that the canonical map $T_{5}\to B_{5}=X_{5}/G_{5}$ is not injective on vertices. One may view $B_{5}$ as a “wrinkled presentation” of the group $G_{5}$ and the map $T_{5}\to B_{5}$ as the “sewing map”. Observe furthermore that every triple $K$ “jumps” on the torus $T_{5}$. (We call $K$ a “knight”.) By definition, a _jump_ on $T_{5}$ is an oriented edge between two vertices of $T_{5}$. 
Every triple $K$ defines three jumps, from the extremity of an edge in $K$ to the origin of the consecutive edge, modulo 3. ###### Lemma 1.1. Jumps are either disjoint or they share a common support. ###### Proof. A jump associated with a triple $K$ corresponds either to the affine transformation $\begin{cases}x\mapsto x+1\mod 6\\\ y\mapsto y-2\mod 5\end{cases}$ where $x$ is even modulo 6, or to its inverse $\begin{cases}x\mapsto x-1\mod 6\\\ y\mapsto y+2\mod 5\end{cases}$ where $x$ is odd modulo 6. It is not difficult to show that these two transformations do not depend on $K$. Since they are inverses of each other, jumps with a common vertex must have the same support. ∎ In particular, the jumps define an involution $\sigma$ of the vertex set of $T_{5}$, whose orbit partition $T_{5}/\langle\sigma\rangle$ coincides with the vertex set of $X_{5}/G_{5}$. Let us orient the torus $T_{5}$ counterclockwise, and consider the positive labelling $\in\\{1,2,3,4\\}$ of the edges issued from a vertex, where $1$ refers to the positive real axis. The basic construction induces a permutation of the labels associated with every jump. We shall now describe this permutation. ###### Lemma 1.2. The permutation of $\\{1,2,3,4\\}$ associated with the jump $\begin{cases}x\mapsto x+1\mod 6\\\ y\mapsto y-2\mod 5\end{cases}$ is the 4-cycle $(1,2,4,3)$. This shows that the resulting permutation does not depend on $K$; the permutation associated with the opposite jump is the inverse permutation. ###### Proof. Let us for example treat the bottom left corner $(0,0)$, which is mapped to $(1,-2)=(1,3)$ under $\sigma$. The corresponding transformation of the counterclockwise labelling reads $\begin{cases}1=A_{0}\\\ 2=A_{4}^{\prime}\\\ 3=D_{3}^{\prime\prime}\\\ 4=A_{3}^{\prime}\end{cases}\mapsto\begin{cases}1=D_{3}\\\ 2=A_{0}^{\prime\prime}\\\ 3=A_{3}\\\ 4=A_{4}^{\prime\prime}\end{cases}$ which corresponds to the label permutation $(1,2,4,3)$. 
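The two affine transformations in the proof of Lemma 1.1 can be checked mechanically. The Python sketch below is our own illustration (with $n=5$ as in $T_5$); it verifies that they combine into an involution $\sigma$ of the vertex set without fixed points, so that the 30 vertices fall into 15 two-element orbits.

```python
def sigma(v, n=5):
    # The jump map on the 6 x n torus: for x even, (x, y) -> (x+1, y-2);
    # for x odd, the inverse map (x, y) -> (x-1, y+2), both taken mod (6, n).
    x, y = v
    if x % 2 == 0:
        return ((x + 1) % 6, (y - 2) % n)
    return ((x - 1) % 6, (y + 2) % n)
```

That $\sigma\circ\sigma=\mathrm{id}$ is immediate (even $x$ goes to odd $x+1$ and back), and no vertex is fixed since $x+1\not\equiv x\bmod 6$; this is the orbit partition $T_{5}/\langle\sigma\rangle$ used above.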
∎ The following shows that $X_{5}$ is locally isomorphic to the Brady complex in the (usual) sense that their links are pairwise isomorphic. ###### Lemma 1.3. Every link in $X_{5}$ is isomorphic to the link of the Brady complex. ###### Proof. We shall compute the links in $X_{5}$. That they are isomorphic to that of the Brady complex follows from [4, 7] (see also §2 below). Since the expression for $\sigma$ is independent of the base point in $T_{5}$, it is enough to check the link of the origin. We represent the links at the origin and its image in $T_{5}$ as follows (the drawing respects the scale provided by the angle metric): The prime labels correspond to the image $(1,-2)$. According to the previous lemma, edges in the link of $X_{5}$ correspond to the permutation $s=(1,2,4,3)$. This defines four additional edges in the above figure: $(x,s(x)^{\prime})$ for every $x\in\\{1,2,3,4\\}$. It is straightforward to check that this graph is the link of $X_{0}$ (compare §2). ∎ The basic construction can be generalized to an arbitrary integer $n\geq 1$ in the following way. Suppose first that $n$ is a sufficiently large integer (e.g., $n\geq 4$). Consider a torus $T_{n}$ of size $6\times n$, where the vertical identification involves a Dehn twist $\tau_{-6}$. For every letter $L$ on $((x,y),(x,y-1))$, where $x$ is even, write labels $L^{\prime}$ and $L^{\prime\prime}$ on, respectively, $((x,y-2),(x+1,y-2))$ and $((x+1,y-3),(x+1,y-4))$; for every letter $L$ on $((1,y),(2,y))$, write labels $L^{\prime}$ and $L^{\prime\prime}$ on, respectively, $((3,y-2),(4,y-2))$ and $((5,y-4),(6,y-4))$. Then the same construction for every triple $K=(L,L^{\prime},L^{\prime\prime})$ on $4n$ letters defines a 2-complex $B_{n}$ which is locally isomorphic to the Brady complex. The notation $B_{n}$ is consistent with the previous notation $B_{5}$. One can further extend this construction of $B_{n}$ to every integer $n\geq 1$ as follows. 
The limit $T_{\infty}:=\varinjlim T_{n}$ (with respect to partial embeddings from a base point) is a cylinder with an obvious action of $\mathbb{Z}$. Since the set of triples (knights) is $\mathbb{Z}$-invariant, this action descends to the basic construction $B_{\infty}$; we let, by definition, $X_{n}$ be the universal cover of the quotient $B_{n}$ of this space by $n\mathbb{Z}$. The notation $B_{n}$ is again consistent. Note however that the description using knights is only clearly visible for $n$ sufficiently large ($n\geq 4$ is large enough). This shows the following: ###### Proposition 1.4. The spaces $X_{n}$ are pairwise isomorphic for $n\geq 1$. ###### Proof. They have a common cover $B_{\infty}$. ∎ In the next section we give a different proof of this fact, which includes the isomorphism with the Brady complex $X_{0}$. ## 2\. Geometric rigidity We shall describe the local data by a type (or “local type”), following [3, §4]. In the latter paper we were interested in two sorts of types, simplicial and metric. In the present paper, we shall use _labelled types_ , which add connecting maps to mark the link edges using angle labels as follows (cf. [3, Rem. 4.5]). ###### Definition 2.1. A _labelled type_ (in dimension 2) is 1. (1) a set of graphs (the links); 2. (2) a set of marked shapes, i.e., polygons with filled interior and labelled angles; 3. (3) a set of connecting maps marking every link edge with an angle label. We define the type $\operatorname{\mathrm{Aut}}(F_{2})$ as follows: 1. (1) the link of the Brady complex; it is isomorphic to the graph (see [7, Fig. 6]) The letters are associated (see [7, §3] for details) with the presentation $\displaystyle\langle a,b,c,d,e,f\mid\ $ $\displaystyle ba=ae=eb,\ de=ec=cd,$ $\displaystyle bc=cf=fb,\ df=fa=ad,$ $\displaystyle ca=ac,\ ef=fe\rangle$ of the braid group $B_{4}$. 2. (2) two shapes, a lozenge and an equilateral triangle, labelled in the following way: 3. (3) a connecting map defined by Let $T$ be a labelled type. 
We say that a 2-complex with labelled face angles is _of type $T$_ if it has the correct links and shapes, and the induced marking of the link edges corresponds to a connecting map. A homomorphism between two complexes of type $T$ is a 2-complex homomorphism which preserves the angle labels. The following is straightforward to verify from, e.g., the original description of $X_{0}$ in [4]. ###### Proposition 2.2. The Brady complex $X_{0}$ is of type $\operatorname{\mathrm{Aut}}(F_{2})$. Our main theorem in this section is a converse of this statement. More precisely, we prove that the complex $X_{0}$ satisfies a universal property: it is freely generated by any of its faces. ###### Theorem 2.3. Let $X$ be a 2-complex of type $\operatorname{\mathrm{Aut}}(F_{2})$. Let $S$ be a face in $X_{0}$ and let $f\colon S\to X$ be a label and shape preserving map from $S$ to a face in $X$. There exists a unique homomorphism $\tilde{f}\colon X_{0}\to X$ whose restriction to $S$ coincides with $f$. Furthermore, $\tilde{f}$ is a covering map onto its image. Every 2-complex of type $\operatorname{\mathrm{Aut}}(F_{2})$ can be naturally endowed with a metric structure, in which the triangle face is equilateral and the lozenge a union of two equilateral triangles. By the link condition, every such complex is locally CAT(0). Every homomorphism between complexes of type $\operatorname{\mathrm{Aut}}(F_{2})$ is isometric, and conversely, every isometry preserves the angle labels. The universal property in the metric situation states that if $f\colon S\to X$ is an isometry between a face $S$ of $X_{0}$ and a face of $X$, then there exists a unique isometry $\tilde{f}\colon X_{0}\to X$ whose restriction to $S$ coincides with $f$. ###### Lemma 2.4. Let $X$ be a 2-complex of type $\operatorname{\mathrm{Aut}}(F_{2})$. Let $S$ be a face in $X_{0}$ and let $f\colon S\to X$ be a map identifying $S$ with a face in $X$. Let $p$ be a vertex of $S$.
There exists a unique label preserving extension $\tilde{f}\colon\mathop{\mathrm{St}}_{p}(X_{0})\to X$ of $f$ to the star of $p$ in $X_{0}$. ###### Proof. Let $L_{0}$ denote the link of $p$ in $X_{0}$ and $L$ the link of $f(p)$ in $X$. The map $f$ induces a label preserving map from an edge $e_{0}$ in $L_{0}$ to an edge $e$ in $L$. Since the labels incident to an arbitrary vertex in $L_{0}$ and $L$ are identical, and the labels around a vertex are pairwise distinct, there exists a unique label preserving extension of $f$ to the faces adjacent to $S$ containing $p$. More generally, it is easy to check that the map $e_{0}\to e$ admits a unique label preserving extension to a graph isomorphism $L_{0}\to L$. This shows that $f$ admits a unique label preserving extension $\tilde{f}\colon\mathop{\mathrm{St}}_{p}(X_{0})\to X$. ∎ ###### Proof of Theorem 2.3. We refer to the standard CAT(0) structure on $X_{0}$ defined before the lemma. Let $C$ be a maximal ball in $X_{0}$ centred in $S$ to which $f$ admits a unique extension. We let $f$ denote this extension. Suppose towards a contradiction that $C$ has a finite radius. Let $p\in\operatorname{\partial}C$. If $p$ belongs to the interior of a face, it is obvious how to extend $f$ to an $\varepsilon$-neighbourhood of $p$ in $X_{0}$. Suppose that $p$ belongs to the interior of an edge $e$, and let $F$ be the unique face containing $e$ and intersecting the interior of $C$. Since both $X_{0}$ and $X$ are of type $\operatorname{\mathrm{Aut}}(F_{2})$, there exists a unique extension of $f$ to an $\varepsilon$-neighbourhood of $p$ in $X_{0}$. Assume now that $p$ is a vertex of $\operatorname{\partial}C$. In this case, $C$ contains a face, and the previous lemma shows that $f$ can be extended in a unique way to an $\varepsilon$-neighbourhood of $p$.
Furthermore, if $p,p^{\prime}$ are two points in $\operatorname{\partial}C$ at distance $\leq 1$, then the two extensions of $f$ from $p$ and $p^{\prime}$ coincide on their intersection. Since $\operatorname{\partial}C$ is compact, this shows that $f$ can be extended to an $\varepsilon$-neighbourhood of $C$, contradicting the maximality of $C$. Finally, $\tilde{f}$ is a covering map by construction. ∎ ###### Corollary 2.5. The spaces $X_{n}$ are pairwise isomorphic for every $n\geq 0$. ###### Proof. Since $X_{n}$ is of type $\operatorname{\mathrm{Aut}}(F_{2})$, we have a covering map $X_{0}\to X_{n}$. Since $X_{n}$ is simply connected, this map is an isomorphism. ∎ ###### Corollary 2.6. The groups $G_{n}$ are of finite index in $\operatorname{\mathrm{Aut}}(F_{2})$. ###### Proof. The special automorphism group $\operatorname{\mathrm{SAut}}(F_{2})$, which is of index 2 in $\operatorname{\mathrm{Aut}}(F_{2})$, acts transitively on the set of triangles in $X_{0}$ by the description in [7]. If $f$ is a triangle in $X_{0}$, and $s$ an element in $G_{n}$, then there exists a unique element $t_{s}\in\operatorname{\mathrm{SAut}}(F_{2})$ whose restriction to $f$ coincides with $s$. By the theorem, $s$ and $t_{s}$ coincide on $X_{0}$, and the map $s\mapsto t_{s}$ provides an embedding of $G_{n}$ into $\operatorname{\mathrm{SAut}}(F_{2})$. ∎ Another corollary, Theorem 2.8 below, shows that the Brady complex admits a “frame” in the following sense. We recall that a flat plane in $X_{0}$ is an isometric embedding $\mathbb{R}^{2}\operatorname{\hookrightarrow}X_{0}$ of the standard Euclidean plane in $X_{0}$. ###### Definition 2.7. A _frame_ on $X_{0}$ is an orientation, and a labelling by two letters $e$ and $f$, of the edge set of $X_{0}$, such that for every flat plane $\Pi$ in $X_{0}$ which is a union of lozenges, the following holds 1.
(1) the ordered set $B_{x}:=(e_{x},f_{x})$ of outgoing edges at a vertex $x$ in $\Pi$, with respective labels $e$ and $f$, forms a basis of $\Pi$, 2. (2) the unique translation of $\Pi$ which takes a vertex $x$ to a vertex $y$ takes the ordered set $B_{x}$ to the ordered set $B_{y}$. Thus a frame is a way to move a basis consistently along the various embeddings $\Pi$ into $X_{0}$. ###### Theorem 2.8. There exists a frame on $X_{0}$. ###### Proof. The map $X_{0}\operatorname{\twoheadrightarrow}B_{5}$ is a covering map. We consider the obvious frame on the torus $T_{5}$, and the induced orientation and labelling of the edge set of $B_{5}$ by the sewing map $T_{5}\to B_{5}$ (which is a bijection on the edge set), and lift the orientation and labelling to $X_{0}$ using the map $X_{0}\operatorname{\twoheadrightarrow}B_{5}$. Since every flat plane in $X_{5}$ maps onto the image of $T_{5}$ in $B_{5}$, this defines a frame on $X_{0}$. ∎ We shall say that a group of automorphisms of $X_{0}$ is _orientable_ if it preserves the frame constructed in Theorem 2.8. Note that every finite index subgroup of $\operatorname{\mathrm{Aut}}(F_{2})$ contains a finite index subgroup which is orientable. ###### Proof. Let $G$ be a finite index subgroup of $\operatorname{\mathrm{Aut}}(F_{2})$. Then the group $G\cap G_{5}$ is of finite index in $\operatorname{\mathrm{Aut}}(F_{2})$: $[\operatorname{\mathrm{Aut}}(F_{2}):G\cap G_{5}]\leq[\operatorname{\mathrm{Aut}}(F_{2}):G][\operatorname{\mathrm{Aut}}(F_{2}):G_{5}].$ Furthermore, $G\cap G_{5}$ is orientable since $G_{5}$ is. ∎ ## 3\. Pinching and filling tori The spaces in §1 are obtained in two steps, by a procedure which can be described as “pinching and systolic filling” starting from a flat torus. Theorem 2.3 shows that every such construction, using a family of flat tori, will have $X_{0}$ as a universal cover, provided it satisfies a few basic conditions, described in the following proposition. ###### Proposition 3.1.
Let $t\geq 1$ be an integer. Suppose that: 1. (1) $T_{1},\ldots,T_{t}$ is a finite family of flat tori, endowed with a simplicial metric structure in which every cell is a lozenge with sides of length 1 2. (2) $\sigma$ is a fixed point free involution on the vertex set of $T:=\bigsqcup_{k=1}^{t}T_{k}$ 3. (3) the systolic length in $T^{\prime}:=T/\langle\sigma\rangle$ is 3 4. (4) every edge in $T^{\prime}$ belongs to a unique systole of length 3 5. (5) the systolic filling $B$ of $T^{\prime}$, obtained by attaching isometrically an equilateral triangle to every systole in $T^{\prime}$, is locally CAT(0) (i.e., the link girth in $B$ is $\geq 2\pi$) then $\widetilde{B}\simeq X_{0}$. ###### Proof. By Theorem 2.3 it is enough to prove that $B$ is of type $\operatorname{\mathrm{Aut}}(F_{2})$. Since $\sigma$ is fixed point free, the link at a vertex in $B$ contains a union of two disjoint circles of length $2\pi$. We shall use the notation in Lemma 1.3. Since every edge belongs to a unique systole of length 3, the systolic filling provides an involution $\tau$ of the set $\\{1,2,3,4\\}\sqcup\\{1^{\prime},2^{\prime},3^{\prime},4^{\prime}\\}$ of vertices in the link. Since $B$ is locally CAT(0), and the edge lengths coming from the systoles are $\pi/3$, the involution $\tau$ induces a bijection from $\\{1,2,3,4\\}$ to $\\{1^{\prime},2^{\prime},3^{\prime},4^{\prime}\\}$. We may assume without loss of generality that $\tau(1)=2^{\prime}$. By the CAT(0) condition, it follows that $\tau(4)\neq 1^{\prime}$. Suppose that $\tau(4)=4^{\prime}$. Then the distance between $3$ and $3^{\prime}$ is $\leq\pi$, which implies $\tau(2)=3^{\prime}$ and $\tau(3)=1^{\prime}$. In this case, however, the cycle $212^{\prime}3^{\prime}$ is of length $<2\pi$, which is a contradiction. Thus, $\tau(4)\neq 4^{\prime}$. This implies that $\tau(4)=3^{\prime}$. Applying again the CAT(0) condition, we must have $\tau(2)=4^{\prime}$ and $\tau(3)=1^{\prime}$.
Labelling the angles of the faces as in §2, the above shows that the link of a vertex in $B$ is label isomorphic to the link of type $\operatorname{\mathrm{Aut}}(F_{2})$, where the labels $t$ are associated with the systolic filling. This implies that $B$ is of type $\operatorname{\mathrm{Aut}}(F_{2})$ and therefore that $X_{0}\simeq\widetilde{B}$. ∎ Furthermore, every (sufficiently deep) orientable torsion free subgroup of finite index is constructed in this way: ###### Proposition 3.2. Let $G$ be an orientable torsion free subgroup of finite index in $\operatorname{\mathrm{Aut}}(F_{2})$. Suppose that the injectivity radius of $X_{0}/G$ is $>1$. Then there exists a finite family of tori $T_{1},\ldots,T_{t}$ and a fixed point free involution $\sigma$ on the vertex set of $T:=\sqcup_{k=1}^{t}T_{k}$, such that $T^{\prime}:=T/\langle\sigma\rangle$ satisfies the conditions in the previous proposition, and the systolic filling $B$ of $T^{\prime}$ is isometric to $X_{0}/G$. ###### Proof. We say that two edges $e$ and $f$ in $X_{0}$ (or $X_{0}/G$) are equivalent if there exists a gallery $(f_{1},\ldots,f_{n})$ containing them, such that $f_{i}$ is a lozenge for every $i$. Let $e$ be an edge in $X_{0}/G$ and let $\tilde{e}$ be a lift of $e$ in $X_{0}$. It is clear that the equivalence class $[\tilde{e}]$ of $\tilde{e}$ maps surjectively onto the equivalence class $[e]$ under the covering map $\pi\colon X_{0}\operatorname{\twoheadrightarrow}X_{0}/G$. The convex hull $H$ of $[\tilde{e}]$ is isometric to a flat plane tessellated by lozenges. We let $T_{e}^{\prime}$ denote the image of $H$ under $\pi$. Say that a vertex $x\in T_{e}^{\prime}$ is a double point if the link of $T_{e}^{\prime}$ at $x$ is a disjoint union of circles. The map $H\to T_{e}^{\prime}$ factorizes through a map $H\to T_{e}\to T_{e}^{\prime}$, where $T_{e}$ is obtained from $T_{e}^{\prime}$ by blowing up every double point. Since $X_{0}/G$ is compact, so is $T_{e}^{\prime}$. Therefore, $T_{e}$ is compact.
Since $H\operatorname{\twoheadrightarrow}T_{e}$ is a covering map and $G$ is orientable, it follows that $T_{e}$ is a torus. We let $\sigma_{e}$ be the partially defined involution on $T_{e}$ inducing the quotient map $T_{e}\operatorname{\twoheadrightarrow}T_{e}^{\prime}$. Let $e_{1},\ldots,e_{t}$ be a representative set of equivalence classes of edges in $X_{0}/G$. Associated with the $e_{k}$’s are tori $T_{k}$ and partially defined involutions $\sigma_{k}$ on $T_{k}$ such that the edge set of $T_{k}/\langle\sigma_{k}\rangle\subset X_{0}/G$ coincides with the equivalence class of $e_{k}$. Furthermore, for every vertex $x\in T_{k}$ not in the domain of $\sigma_{k}$, there exists a unique $k^{\prime}\neq k$ such that $x$ is a vertex of $T_{k^{\prime}}$. This defines an involution $\sigma_{0}$ on the complement of the domain of $\sqcup_{k}\sigma_{k}$ in $T:=\sqcup_{k}T_{k}$. Together, the maps $\sigma_{k}$ and $\sigma_{0}$ define a fixed point free involution $\sigma$ on $T$. Since the injectivity radius of $X_{0}/G$ is $>1$, the systolic length of $T/\langle\sigma\rangle$ is $\geq 3$, and therefore $X_{0}/G$ is the systolic filling of $T/\langle\sigma\rangle$ in the sense of the previous proposition. ∎ The map $T\to B$ can be viewed as a structure of “space with jumps” on the torus (or union of tori) $T$. The geodesics with respect to such a structure on $T$ are allowed to jump between certain transverse codimension 1 subspaces they cross (in the present situation, the 1-skeleton, that is, the sides of the triangles). The length of the jump and the incidence angles are described by the geometry of the added triangles. A pinching occurs along codimension 2 subspaces (intersections of codimension 1 subspaces), which are singular sets, corresponding to instantaneous jumps of a geodesic between two points in $T$. We will not attempt to formalize this notion further in the present paper. ###### Remark 3.3.
The number $t$ of tori, and the geometric parameters of the individual tori, provide conjugacy invariants for the given subgroup $G$. As mentioned in the introduction, the description of the family of subgroups with a prescribed invariant, e.g., the torsion free finite index subgroups $G$ of $\operatorname{\mathrm{Aut}}(F_{2})$ with a given torus number $t(G)$, seems rather involved however. ## 4\. A group cobordism for $\operatorname{\mathrm{Aut}}(F_{2})$ In this section we show that the surgery techniques from [3], which were used to construct (in many cases, infinitely many) groups of a given type, can be applied to the group $\operatorname{\mathrm{Aut}}(F_{2})$. (Indeed, this is how the toric presentation in §1 and the groups $G_{n}$ were found.) Let $A$ be a (e.g., labelled) type. A category $\mathrm{Bord}_{A}$ of group cobordisms of type $A$ can be defined as follows. The objects in this category are called collars, and the arrows, group cobordisms; in the present paper we only discuss the case where $A$ is the type $\operatorname{\mathrm{Aut}}(F_{2})$ defined in §2. Let us first review the notion of collar. An (abstract) _open collar_ is a topological space of the form $H\times(0,1)$ where $H$ is a graph (not necessarily connected). If $X$ is a 2-complex, an open collar in $X$ is, by definition, an embedding $C\colon H\times(0,1)\operatorname{\hookrightarrow}X$. We shall refer to the domain $H\times(0,1)$ as the abstract collar defining $C$. The _dual_ of an open collar of $X$ is the open collar $C^{\prime}\colon H\times(0,1)\operatorname{\hookrightarrow}X$ defined by $C^{\prime}(x,t):=C(x,1-t)$. The _collar closure_ of $C$ is the topological closure $\overline{C}$ of the image of $C$ in $X$; the _span_ of $C$ in $X$ is the set $\operatorname{\mathrm{span}}(C)$ of vertices of $X$ contained in the collar closure of $C$; the _simplicial closure_ of $C$ is the union of all the open edges and open faces it intersects.
As in [3] we only consider collars which are simplicially closed and vertex free. We shall denote by $\mathrm{Bord}_{\operatorname{\mathrm{Aut}}(F_{2})}$ the category of group cobordisms of type $\operatorname{\mathrm{Aut}}(F_{2})$. We construct an object $C$ in $\mathrm{Bord}_{\operatorname{\mathrm{Aut}}(F_{2})}$ as follows. Fix an integer $y\in\mathbb{N}$. We use the notation introduced at the end of §1. We will view $C$ as a “slice” of the cylinder $T_{\infty}$. We fix four letters $A_{y},B_{y},C_{y},D_{y}$ respectively on $((x,y),(x,y-1))$ where $x=0,2,4$ and on $((1,y),(2,y))$. Recall that for every letter $L$ on $((x,y),(x,y-1))$, where $x$ is even, we write labels $L^{\prime}$ and $L^{\prime\prime}$ on, respectively, $((x,y-2),(x+1,y-2))$ and $((x+1,y-3),(x+1,y-4))$, while for a letter $L$ on $((1,y),(2,y))$, we write labels $L^{\prime}$ and $L^{\prime\prime}$ on, respectively, $((3,y-2),(4,y-2))$ and $((5,y-4),(6,y-4))$. By definition, the cylinder $T_{\infty}$ is a quotient of a strip $[0,6]\times\mathbb{R}$ using the twist $\tau_{-6}$ in the vertical direction. Recall that a gallery is a sequence of faces $(f_{1},\ldots,f_{n})$ such that $f_{i}\cap f_{i+1}$ is an edge. We say that a gallery in $T_{\infty}$ is _generating_ if it is closed (i.e., cyclic permutations remain galleries) and homotopic to an element generating $\pi_{1}(T_{\infty})$. ###### Lemma 4.1. The minimal generating gallery has length $n=12$. ###### Proof. Indeed, writing $T_{\infty}$ as a quotient of a strip $[0,6]\times\mathbb{R}$ of size $6\times\infty$ by $\tau_{-6}$, the gallery distance between a boundary edge on $\\{0\\}\times\mathbb{R}$ and its image by $\tau_{-6}$ in $\\{6\\}\times\mathbb{R}$ is $12=6+6$. ∎ The collar $C$ will be built from a minimal generating gallery on $T_{\infty}$. Starting from the edge labelled $A_{y}$, the gallery is defined by the succession of edges $f_{i}\cap f_{i+1}$.
The edges have the following labels: $A_{y},A_{y+1}^{\prime},A_{y}^{\prime},A_{y+1}^{\prime\prime},B_{y-2},B_{y-1}^{\prime},B_{y-2}^{\prime},B_{y-1}^{\prime\prime},C_{y-4},C_{y-3}^{\prime},C_{y-4}^{\prime},C_{y-3}^{\prime\prime}$ Note that the corresponding gallery $(f_{1},\ldots,f_{12})$ is closed: every change of letter occurs with a drop of $-2$ for a total drop of $-6$, which is consistent with $\tau_{-6}$. This defines a “zig-zag” gallery generating $\pi_{1}(T_{\infty})$. As a topological space the gallery $(f_{1},\ldots,f_{12})$ is homeomorphic to $[0,1]\times\mathbb{S}^{1}$. We shall refer to the gallery minus its boundary as open. ###### Definition 4.2. Let $C$ be the union of 1. (1) the image of the open generating gallery $(f_{1},\ldots,f_{12})$ in the basic construction $B_{\infty}$. 2. (2) the triangles in $B_{\infty}$ associated with the following six triples (knights) $K=(L,L^{\prime},L^{\prime\prime})$ on the letters $L=A_{y+1},A_{y},B_{y-1},B_{y-2},C_{y-3},C_{y-4}$ where every triangle associated with a triple $K$ is semi-open, in the sense that it does not contain the (unique) edge not belonging to the image of the gallery. ###### Lemma 4.3. $C$ is a product space. ###### Proof. It is clear that the open gallery is a product space homeomorphic to $(0,1)\times\mathbb{S}^{1}$. Under this identification, the added triples $K$ define a space of the form $(0,1)\times H$ where $H$ is a finite graph (the nerve) obtained by adding 6 edges to $\mathbb{S}^{1}$. ∎ One can of course give an explicit description of $H$: ###### Lemma 4.4. The graph $H$ is isomorphic to the Cayley graph of $\mathbb{Z}/12\mathbb{Z}$, with respect to 1, together with an additional edge $(n,n+2)$ for every $n\equiv 0,1\mod 4$. Therefore, we may view $C$ as an open collar in $B_{\infty}$ under the identity mapping $C\to B_{\infty}$. ###### Lemma 4.5. $C$ is a full collar in $B_{\infty}$. ###### Proof.
Every open edge $e=f_{i}\cap f_{i+1}$ in $C$ belongs to a (unique) triple $K$, and therefore every point in $e$ has an open neighbourhood included in $C$. ∎ Since $B_{\infty}$ is a complex of type $\operatorname{\mathrm{Aut}}(F_{2})$, the above shows that the isomorphism class of $C$ is an object in the category $\mathrm{Bord}_{\operatorname{\mathrm{Aut}}(F_{2})}$. The arrows in $\mathrm{Bord}_{\operatorname{\mathrm{Aut}}(F_{2})}$ are group cobordisms: ###### Definition 4.6. A _group cobordism_ is a 2-complex $B$ together with a pair $(C,D)$ of collars of $B$ whose boundaries $\operatorname{\partial}^{-}C$ and $\operatorname{\partial}^{+}D$ form a partition of the topological boundary of $B$: $\operatorname{\partial}B=\operatorname{\partial}^{-}C\sqcup\operatorname{\partial}^{+}D.$ Let us construct the group cobordism $B$ of type $\operatorname{\mathrm{Aut}}(F_{2})$. The collar $C$ depends on $y\in\mathbb{N}$; however, it is clear that $C_{y}\simeq C_{y+1}$. The cobordism $B$ has $C$ as domain and codomain. ###### Definition 4.7. Let $B$ be the union of 1. (1) $C_{y}\cup C_{y+1}$ 2. (2) the closed triangle in $B_{\infty}$ associated with the triple $K=(L,L^{\prime},L^{\prime\prime})$ on the letter $L=D_{y-2}$. Again, $B$ depends on $y$, but $B_{y}$ is isomorphic to $B_{y+1}$, so it defines a unique arrow, again denoted $B$, in $\mathrm{Bord}_{\operatorname{\mathrm{Aut}}(F_{2})}$. The inclusion maps $L_{B},R_{B}\colon C\to B$ (left and right collar boundary) are the obvious inclusions of $C$ as $C_{y}$ and $C_{y+1}$. In particular: ###### Theorem 4.8. The map taking 1 to $B$ induces a unital inclusion $\mathbb{N}\to\mathrm{Bord}_{\operatorname{\mathrm{Aut}}(F_{2})}$. ###### Proof. Indeed, $B^{\circ n}\neq B^{\circ m}$ if $n\neq m$, where $B^{\circ n}$ refers to the $n$-fold composition $B\circ\cdots\circ B$ in $\mathrm{Bord}_{\operatorname{\mathrm{Aut}}(F_{2})}$. ∎ In the language of [3], the above shows the following: ###### Theorem 4.9.
$\operatorname{\mathrm{Aut}}(F_{2})$ is virtually accessible by surgery. This means that $\operatorname{\mathrm{Aut}}(F_{2})$ admits a finite index subgroup which is the fundamental group of a complex obtained by a surgery construction in a cobordism category (see [3, §10]). Here the groups $G_{n}$ are of finite index in $\operatorname{\mathrm{Aut}}(F_{2})$ and are the fundamental groups of the complexes $B_{n}$, which are of type $\operatorname{\mathrm{Aut}}(F_{2})$ and defined by a surgery construction in $\mathrm{Bord}_{\operatorname{\mathrm{Aut}}(F_{2})}$. We take this opportunity to make a correction to [2, Lemma 17]. At the bottom of the page it is stated that “there are two extensions of this section”: it should be “three extensions”. Namely, in the first case (when the lozenges on the south-east triangles are oriented pointing south) one extension is the 3-strip, as indicated, which amounts to extending the lozenges with two triangles. A third sort of extension uses lozenges instead. In this case, the lozenges belong to a (using the terminology in [2]) semi-infinite $\diamond$-strip of type $2\times\infty$. This can be visualized using the surgery construction above: starting from the closed triangle defined in $B$ above, Def. 4.7, (2), one may use three lozenges belonging to a single collar (either all belonging to $C_{y}$, or all in $C_{y+1}$) which can be extended into three semi-infinite $\diamond$-strips of type $2\times\infty$ in the universal cover (so the resulting puzzle has an order 3 symmetry). ## 5\. Complements to Theorem 2.3 We conclude with some remarks on Theorem 2.3, regarding spaces locally isometric to $X_{0}$. It is an interesting exercise to construct groups acting freely uniformly on a CAT(0) 2-complex locally isometric to the 2-complex $X_{0}$ of $\operatorname{\mathrm{Aut}}(F_{2})$ (but not isometric to it), in the sense that their links are isometric to the link of $X_{0}$. In the present section, we provide one example.
By Theorem 2.3 such a complex $X^{\prime}$ is not of type $\operatorname{\mathrm{Aut}}(F_{2})$. The example will be of the following type. Let $A$ denote the metric type (i.e., a set of metric graphs, and a set of shapes) defined by: 1. (1) Graph: the link of the Brady complex with the angular metric (see §2). 2. (2) Shapes: an equilateral triangle, and a hexagon with sides of length 1. Both are viewed as standard polygons in the Euclidean plane with the induced metric. By definition, every CAT(0) 2-complex of type $A$ is locally isometric to $X_{0}$ but not isometric to it. ###### Proposition 5.1. There exists a group $G^{\prime}$ acting freely uniformly isometrically on a CAT(0) 2-complex $X^{\prime}$ of type $A$. The construction is as follows. We begin with a single hexagon on a set of 6 vertices, which we denote $\\{1^{+},2^{-},3^{+},1^{-},2^{+},3^{-}\\},$ and edges labelled from 1 to 6 in a cyclic order as follows. We shall realize these 6 vertices as the vertex set of a locally CAT(0) space of type $A$, containing the hexagon as a face. Consider additional edges between these vertices: 1. two edges between $i^{-}$ and $i^{+}$ 2. two edges between $i^{+}$ and $(i+1)^{+}$ 3. two edges between $i^{-}$ and $(i+1)^{-}$ (where $i$ is an index modulo 3) organized and named as follows: Together with the edges of the hexagon, this defines a regular graph of degree 8. Note that this graph has a natural symmetry $\sigma$ of order 3 taking every letter $l_{i}$ to the letter $l_{i+1}$ (modulo 3). Consider the following hexagon and four triangles 1. $(a_{1},c_{2},f_{1},e_{1},d_{2},b_{1})$ 2. $(d_{1},a_{1},4)$ $(f_{1},d_{1},1^{-})$ 3. $(b_{1},c_{1},4^{-})$ $(c_{1},e_{1},1)$ Together with their images under $\sigma$, this defines 3 hexagons and 12 triangles. In addition to these, add the four triangles: 1. $(a_{1},a_{2},a_{3})$ $(b_{1},b_{2},b_{3})$ 2.
$(e_{1},e_{2},e_{3})$ $(f_{1},f_{2},f_{3})$ This defines a 2-complex, whose fundamental group is $G^{\prime}$ and whose universal cover is $X^{\prime}$. It is immediate to check that: ###### Lemma 5.2. The link of $X^{\prime}$ is isometric to the link of $X_{0}$. ###### Proof. Note that it is enough to check a single vertex, since $\sigma$ and the reflection with respect to the horizontal axis extend to the 2-complex. We may index the vertex set of the link by $a_{1},b_{1},a_{3},b_{3},c_{1},d_{1},1,2$, where the latter two numbers are associated with the initial hexagon. There are four hexagon edges: $(a_{1},b_{1})$, $(a_{3},c_{1})$, $(b_{3},d_{1})$, and $(1,2)$ (for the first hexagon). One can then draw the edges associated with the triangles: these are $(a_{1},d_{1})$, $(d_{1},1)$, $(b_{1},c_{1})$, $(c_{1},1)$, and, from the images under $\sigma$: $(b_{3},2)$, $(a_{3},2)$, and finally, $(a_{1},a_{3})$ and $(b_{1},b_{3})$. It is not difficult to show that this graph is isometric to the link of $X_{0}$. ∎ We also note that: ###### Proposition 5.3. $\operatorname{\mathrm{Aut}}(X^{\prime})$ is vertex transitive. This is part of the argument in the previous lemma. ## References * [1] S. Barré, M. Pichot. Intermediate rank and property RD, arXiv:0710.1514. * [2] S. Barré, M. Pichot. $\operatorname{\mathrm{Aut}}(F_{2})$ puzzles. Geometriae Dedicata, 199(1):225–246, 2019. * [3] S. Barré, M. Pichot. Surgery on discrete groups. Preprint. * [4] T. Brady. Automatic structures on $\operatorname{\mathrm{Aut}}(F_{2})$. Archiv der Mathematik, 63(2):97–102, 1994. * [5] T. Brady. Artin groups of finite type with three generators. The Michigan Mathematical Journal, 47(2):313–324, 2000. * [6] M. R. Bridson, A. Haefliger. Metric spaces of non-positive curvature (Vol. 319). Springer Science & Business Media, 2013. * [7] J. Crisp, L. Paoluzzi. On the classification of CAT(0) structures for the 4-string braid group. The Michigan Mathematical Journal, 53(1):133–163, 2005.
# First Integrals and symmetries of nonholonomic systems Paula Balseiro (Universidade Federal Fluminense, Instituto de Matemática, Rua Mario Santos Braga S/N, 24020-140, Niteroi, Rio de Janeiro, Brazil. E-mail: [email protected]) and Nicola Sansonetto (Università degli Studi di Verona, Dipartimento di Informatica, strada le Grazie 15, 37134 Verona, Italy. E-mail: [email protected]) ###### Abstract In nonholonomic mechanics, the presence of constraints in the velocities breaks the well-understood link between symmetries and first integrals of holonomic systems, expressed in Noether’s Theorem. However, there is a known special class of first integrals of nonholonomic systems, generated by vector fields tangent to the group orbits and called horizontal gauge momenta, that suggests that some version of this link should still hold. In this paper we give sufficient conditions for the existence of horizontal gauge momenta; our analysis leads to a constructive method and a precise estimate of their number, with fundamental consequences for the integrability of some nonholonomic systems as well as their hamiltonization. We apply our results to three paradigmatic examples: the snakeboard, a solid of revolution rolling without sliding on a plane, and a heavy homogeneous ball that rolls without sliding inside a convex surface of revolution. In particular, for the snakeboard we show the existence of a new horizontal gauge momentum that reveals new aspects of its integrability. ###### Contents 1. 1 Introduction 1. 1.1 Symmetries and first integrals 2. 1.2 Main results of the paper 3. 1.3 Outline of the paper 2. 2 Initial Setting: Nonholonomic systems and horizontal gauge momenta 1. 2.1 Nonholonomic systems with symmetries 2. 2.2 Horizontal gauge momenta 3. 3 A momentum equation 1. 3.1 An intrinsic momentum equation 2. 3.2 The “strong invariance” condition on the kinetic energy 3. 3.3 Determining the horizontal gauge momenta (in global coordinates) 4.
3.4 A geometric interpretation: horizontal gauge symmetries as parallel sections 4. 4 Existence of horizontal gauge momenta and related consequences on the dynamics and geometry of the systems 1. 4.1 Integrability and hamiltonization of the reduced dynamics 2. 4.2 Horizontal gauge momenta and broad integrability of the complete system 5. 5 Examples 1. 5.1 The snakeboard 2. 5.2 Solids of Revolution 3. 5.3 A homogeneous ball on a surface of revolution 4. 5.4 Comments on the hypothesis of Theorem 3.15: examples and counterexamples 6. A Appendix: Almost Poisson brackets and gauge transformations 7. B Appendix: Some facts on reconstruction theory ## 1 Introduction ### 1.1 Symmetries and first integrals The existence of first integrals plays a fundamental role in the study of dynamical systems and influences many aspects of their behavior, in particular their integrability. It is well-known that in holonomic systems with symmetries (described by a suitable action of a Lie group), Noether’s Theorem ensures that the components of the momentum map are first integrals of the dynamics. When we impose constraints on the velocities, we obtain the so-called nonholonomic systems [52, 50, 11, 23]: mechanical systems on a manifold $Q$ where the permitted velocities define a nonintegrable constant-rank distribution $D\subset TQ$ on $Q$. One way to see the non-Lagrangian/Hamiltonian character of these systems is that the presence of symmetries does not necessarily lead to first integrals (see [50, 22, 43, 12, 17, 57, 45, 11, 20, 60, 32, 23, 30]); in particular, the components of the momentum map need not be conserved by the dynamics. On the other hand, it has been observed that there are many first integrals linear in the momenta that are generated by vector fields that are not infinitesimal generators of the symmetry action, but are still tangent to the group orbits [9, 60, 28, 31, 7].
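The mechanism behind such linear first integrals can be stated schematically. For a section $\xi$ of the bundle $Q\times\mathfrak{g}\to Q$ whose infinitesimal generator $\xi_{Q}$ is tangent to the constraint distribution $D$, the nonholonomic momentum equation of Bloch, Krishnaprasad, Marsden and Murray gives, along the nonholonomic dynamics (a sketch; the conventions here are ours and may differ from the intrinsic momentum equation derived later in Section 3):

```latex
% Nonholonomic momentum equation (schematic; sign and pairing conventions
% may differ from those fixed in Section 3 of this paper).
\frac{d}{dt}\,J_{\xi}
  \;=\;
  \frac{d}{dt}\Big\langle \frac{\partial L}{\partial \dot{q}},\,\xi_{Q}(q)\Big\rangle
  \;=\;
  \Big\langle \frac{\partial L}{\partial \dot{q}},\,
      \Big(\frac{d\xi}{dt}\Big)_{\!Q}(q)\Big\rangle .
```

In Noether’s setting $\xi$ is a fixed element of $\mathfrak{g}$, so the right-hand side vanishes identically; in the nonholonomic setting one instead looks for $q$-dependent sections $\xi$ making the right-hand side vanish along the motion, and $J_{\xi}$ is then a first integral even though $\xi_{Q}$ is not an infinitesimal generator of the action.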
The search for a possible link between the presence of symmetries and the existence of first integrals in nonholonomic systems – if any exists – has been an active field of research in the last thirty years [10, 22, 43, 9, 12, 17, 57, 45, 60, 32, 30], and it dates back at least to the fifties with the work of Agostinelli [1] and, fifteen years later, with the works of Iliev [40, 41]. More recently, new tools and techniques with a strong relation to the symmetries of the system – such as the nonholonomic momentum map, momentum equations, and gauge momenta – have been introduced in order to understand the dynamical and geometrical aspects of nonholonomic systems. In the present paper, we investigate the existence of first integrals of the nonholonomic dynamics coming from the presence of symmetries using these tools and the so-called gauge method, introduced in [9] and further developed in [28, 29, 31]. ### 1.2 Main results of the paper Given a nonholonomic system with a symmetry described by the (free and proper) action of a Lie group $G$, we consider functions of type $J_{\xi}=\langle J,\xi\rangle$, where $J$ is the canonical momentum map and $\xi$ is a section of the bundle $Q\times\mathfrak{g}\to Q$, with the property that the infinitesimal generator of each $\xi(q)$, $q\in Q$, is tangent to the constraint distribution. Theorem 3.15 gives conditions on nonholonomic systems ensuring that the presence of symmetries induces the existence of first integrals of type $J_{\xi}$, called horizontal gauge momenta (while the section $\xi$ is called a horizontal gauge symmetry) [9]. Denoting by $k$ the rank of the distribution $S$ given by the intersection of the constraint distribution $D$ with the tangent space to the $G$-orbits, we characterize the nonholonomic systems that admit exactly $k$ horizontal gauge momenta that are functionally independent and $G$-invariant.
Precisely, we write an explicit system of linear ordinary differential equations whose solutions give rise to the $k$ horizontal gauge momenta. These results are based on an intrinsic momentum equation that characterizes the horizontal gauge momenta. We also show that this intrinsic momentum equation can be regarded as a parallel transport equation: we prove that a horizontal gauge symmetry $\xi$ is a parallel section along the nonholonomic dynamics on $Q$, with respect to an affine connection defined on (a subbundle of) $Q\times\mathfrak{g}\to Q$. This affine connection arises by adding to the Levi-Civita connection a bilinear form that carries the information related to the system of differential equations determining the horizontal gauge momenta. The fact that we know the exact number of horizontal gauge momenta and have a systematic way of constructing them has fundamental consequences for the geometry and dynamics of nonholonomic systems, see e.g. [38, 59, 16, 27, 26, 23, 4, 37]. Under the hypotheses of Theorem 3.15, we first show that the reduced dynamics is integrable by quadratures and, if certain compactness conditions are satisfied, it is indeed periodic (Theorem 4.4). From a more geometric point of view, if the reduced dynamics is periodic, the reduced space inherits the structure of an $S^{1}$-principal bundle outside the equilibria. Second, we prove (Theorem 4.5) the hamiltonization of these nonholonomic systems (see also [37, 8]); precisely, the existence of $k=\textup{rank}(S)$ horizontal gauge momenta and the fact that $\textup{dim}(Q/G)=1$ guarantee the existence of a Poisson bracket on the reduced space $\mathcal{M}/G$ that describes the reduced dynamics. This bracket is constructed using a dynamical gauge transformation by a 2-form that we also show to be related to the momentum equation. Third, when the reduced dynamics is periodic, we can obtain information on the complete dynamics (Theorem 4.12).
In particular, if the symmetry group $G$ is compact, the reconstructed dynamics is quasi-periodic on tori of dimension at most $r+1$, where $r$ is the rank of the Lie group $G$, and the phase space inherits the structure of a torus bundle. If the symmetry group is not compact, the situation is less simple, but still understood: the complete dynamics is either quasi-periodic or diffeomorphic to $\mathbb{R}$, and whether one or the other case is more frequent or generic depends on the symmetry group (see Section 4.2, Appendix B and [2, 33]).

System | Symmetry | rank$(S)$ | $\sharp$ horizontal gauge momenta
---|---|---|---
Nonholonomic oscillator | $\mathbb{T}^{2}$ | 1 | 1
Vertical Disk | $\textrm{SE}(2)\times S^{1}$ | 2 | 2
Tippe-top | $\textrm{SE}(2)\times S^{1}$ | 2 | 2
Falling disk | $\textrm{SE}(2)\times S^{1}$ | 2 | 2
Snakeboard | $\textrm{SE}(2)\times S^{1}$ | 2 | 2
Body of revolution | $\textrm{SE}(2)\times S^{1}$ | 2 | 2
Ball in a cylinder | $\textrm{SO}(3)\times S^{1}$ | 2 | 2
Ball in a cup/cap | $\textrm{SO}(3)\times S^{1}$ | 2 | 2
Ball in a surface of revolution | $\textrm{SO}(3)\times S^{1}$ | 2 | 2

Table 1: Nonholonomic systems and related horizontal gauge momenta with respect to the symmetry.

Table 1 shows how many classical examples of nonholonomic systems fit into the scheme of Theorem 3.15 and also highlights the relation between $\textup{rank}(S)$ and the number of horizontal gauge symmetries as stated in the theorem. We study in detail four of these examples: the nonholonomic oscillator, the snakeboard, a solid of revolution rolling on a plane, and a heavy homogeneous ball rolling on a surface of revolution. In particular, the last two examples are paradigmatic of a large class of nonholonomic systems with symmetry. In the case of the snakeboard, we find two horizontal gauge momenta, one of which, as far as we know, has not appeared in the literature before.
We use this fact to prove the integrability by quadratures of the reduced system and its hamiltonization. Then we investigate what happens in certain examples when the hypotheses of Theorem 3.15 are not satisfied. In these cases, using the intrinsic momentum equation, it is still possible to find horizontal gauge momenta (in some cases, fewer than $k$ of them).

### 1.3 Outline of the paper

The paper is organized as follows: in Section 2 we recall the basic aspects and notation of nonholonomic systems and horizontal gauge momenta. In Section 3 we present an intrinsic formulation of the momentum equation and the main result of the paper, Theorem 3.15. The results of this section are illustrated with the example of the nonholonomic oscillator. The fundamental consequences of Theorem 3.15, integrability and hamiltonization, are studied in Section 4. Finally, in Section 5 we first apply our techniques and results to three paradigmatic examples outlined in bold in Table 1. Moreover, we also study different cases where the hypotheses of Theorem 3.15 are not satisfied. The paper is complemented by two appendices: App. A recalls basic definitions regarding almost Poisson brackets and gauge transformations, and App. B presents basic facts about reconstruction theory. Throughout the work, we assume that all objects (functions, manifolds, distributions, etc.) are smooth. Moreover, unless stated otherwise, we consider Lie group actions that are free and proper, or we confine our analysis to the submanifold where the action is free and proper. Finally, whenever possible, summation over repeated indices is understood.

Acknowledgement: P.B. would like to thank the University of Padova and Prof. F. Fassò for the kind hospitality during her visit and CNPq (Brazil) for financial support. N.S. thanks IMPA and Prof. H. Bursztyn, PUC-Rio and Prof. A. Mandini for the kind hospitality during all his visits to Rio de Janeiro. P.B. and N.S. also thank F. Fassò and A.
Giacobbe for many interesting and useful discussions on finite-dimensional non-Hamiltonian integrable systems, and Alejandro Cabrera and Jair Koiller for their insightful comments.

## 2 Initial Setting: Nonholonomic systems and horizontal gauge momenta

### 2.1 Nonholonomic systems with symmetries

A nonholonomic system is a mechanical system on a configuration manifold $Q$ with (linear) constraints on the velocities. The permitted velocities are represented by a nonintegrable constant-rank distribution $D$ on $Q$. A nonholonomic system, denoted by the pair $(L,D)$, is given by a manifold $Q$, a lagrangian function $L:TQ\to\mathbb{R}$ of mechanical type, i.e., $L=\kappa-U$ with $\kappa$ and $U$ the kinetic and potential energy respectively, and a nonintegrable distribution $D$ on $Q$. We now write the equations of motion of such systems following [10]. Since the lagrangian $L$ is of mechanical type, the Legendre transformation $Leg:TQ\to T^{*}Q$ defines the submanifold $\mathcal{M}:=Leg(D)$ of $T^{*}Q$. Moreover, since $Leg$ is linear on the fibers, $\tau_{\mbox{\tiny{$\mathcal{M}$}}}:=\tau|_{\mathcal{M}}:\mathcal{M}\to Q$ is also a subbundle of $\tau:T^{*}Q\to Q$, where $\tau$ denotes the canonical projection. Then, if $\Omega_{Q}$ denotes the canonical 2-form on $T^{*}Q$ and $H$ the hamiltonian function induced by the lagrangian $L$, we denote by $\Omega_{\mbox{\tiny{$\mathcal{M}$}}}:=\iota^{*}\Omega_{Q}$ and $H_{\mbox{\tiny{$\mathcal{M}$}}}:=\iota^{*}H$ the 2-form and the hamiltonian on $\mathcal{M}$, where $\iota:\mathcal{M}\to T^{*}Q$ is the natural inclusion.
We define the (nonintegrable) distribution $\mathcal{C}$ on $\mathcal{M}$ given, at each $m\in\mathcal{M}$, by $\mathcal{C}_{m}:=\\{v_{m}\in T_{m}\mathcal{M}\ :T\tau_{\mbox{\tiny{$\mathcal{M}$}}}(v_{m})\in D_{q}\mbox{ for }q=\tau_{\mbox{\tiny{$\mathcal{M}$}}}(m)\\}.$ (2.1) The nonholonomic dynamics is then given by the integral curves of the vector field $X_{\mbox{\tiny{nh}}}$ on $\mathcal{M}$, taking values in $\mathcal{C}$ (i.e., $X_{\mbox{\tiny{nh}}}(m)\in\mathcal{C}_{m}$) such that ${\bf i}_{X_{\mbox{\tiny{nh}}}}\Omega_{\mbox{\tiny{$\mathcal{M}$}}}|_{\mathcal{C}}=dH_{\mbox{\tiny{$\mathcal{M}$}}}|_{\mathcal{C}},$ (2.2) where $\Omega_{\mbox{\tiny{$\mathcal{M}$}}}|_{\mathcal{C}}$ and $dH_{\mbox{\tiny{$\mathcal{M}$}}}|_{\mathcal{C}}$ are the point-wise restrictions of the forms to $\mathcal{C}$. It is worth noticing that the 2-section $\Omega_{\mbox{\tiny{$\mathcal{M}$}}}|_{\mathcal{C}}$ is nondegenerate, and thus we have a well-defined vector field $X_{\mbox{\tiny{nh}}}$ satisfying (2.2), called the nonholonomic vector field. On the hamiltonian side, we will denote a nonholonomic system by the triple $(\mathcal{M},\Omega_{\mbox{\tiny{$\mathcal{M}$}}}|_{\mathcal{C}},H_{\mbox{\tiny{$\mathcal{M}$}}})$.

Symmetries of a nonholonomic system. We say that an action of a Lie group $G$ on $Q$ defines a symmetry of the nonholonomic system $(L,D)$ if it is free and proper and its tangent lift leaves $L$ and $D$ invariant. Let $\mathfrak{g}$ be the Lie algebra associated to the Lie group $G$. At each $q\in Q$, we denote by $V_{q}\subset T_{q}Q$ the tangent space to the $G$-orbit at $q$, that is, $V_{q}:=\textup{span}\\{\eta_{Q}(q):\eta\in\mathfrak{g}\\}$, where $\eta_{Q}(q)$ denotes the infinitesimal generator of $\eta$ at $q$. The lift of the $G$-action to the cotangent bundle $T^{*}Q$ also leaves the submanifold $\mathcal{M}\subset T^{*}Q$ invariant, hence there is a well-defined $G$-action on $\mathcal{M}$ denoted by $\Psi:G\times\mathcal{M}\to\mathcal{M}$.
The hamiltonian function $H_{\mbox{\tiny{$\mathcal{M}$}}}$ and the 2-section $\Omega_{\mbox{\tiny{$\mathcal{M}$}}}|_{\mathcal{C}}$ are $G$-invariant, and we say that $(\mathcal{M},\Omega_{\mbox{\tiny{$\mathcal{M}$}}}|_{\mathcal{C}},H_{\mbox{\tiny{$\mathcal{M}$}}})$ is a nonholonomic system with a $G$-symmetry. We denote by $\mathcal{V}_{m}\subset T_{m}\mathcal{M}$ the tangent space to the $G$-orbit at $m\in\mathcal{M}$ (i.e., $\mathcal{V}_{m}=\\{\eta_{\mbox{\tiny{$\mathcal{M}$}}}(m):\eta\in\mathfrak{g}\\}$).

###### Definition 2.1 ([12]).

A nonholonomic system $(\mathcal{M},\Omega_{\mbox{\tiny{$\mathcal{M}$}}}|_{\mathcal{C}},H_{\mbox{\tiny{$\mathcal{M}$}}})$ with a $G$-symmetry verifies the dimension assumption if, for each $q\in Q$, $T_{q}Q=D_{q}+V_{q}.$ (2.3)

Equivalently, the dimension assumption can be stated as $T_{m}\mathcal{M}=\mathcal{C}_{m}+\mathcal{V}_{m}$ for each $m\in\mathcal{M}$. At each $q\in Q$, we define the distribution $S$ over $Q$ whose fibers are $S_{q}:=D_{q}\cap V_{q}$, and the distribution $\mathfrak{g}_{S}$ over $Q$ with fibers $(\mathfrak{g}_{S})_{q}=\\{\xi^{q}\in\mathfrak{g}\ :\ \xi_{Q}(q)\in S_{q}\\},$ (2.4) where $\xi_{Q}(q):=(\xi^{q})_{Q}(q)$. Due to the dimension assumption (2.3), $\mathfrak{g}_{S}\to Q$ is a vector subbundle of $Q\times\mathfrak{g}\to Q$ and, if the action is free, then $\textup{rank}(S)=\textup{rank}(\mathfrak{g}_{S})$ (see [4]). Throughout this article, we denote by $\Gamma(\mathfrak{g}_{S})$ the sections of the bundle $\mathfrak{g}_{S}\to Q$.

Reduction by symmetries. If $(\mathcal{M},\Omega_{\mbox{\tiny{$\mathcal{M}$}}}|_{\mathcal{C}},H_{\mbox{\tiny{$\mathcal{M}$}}})$ is a nonholonomic system with a $G$-symmetry, the nonholonomic vector field $X_{\mbox{\tiny{nh}}}$ is $G$-invariant, i.e., $T\Psi_{g}(X_{\mbox{\tiny{nh}}}(m))=X_{\mbox{\tiny{nh}}}(\Psi_{g}(m))$ with $\Psi_{g}:\mathcal{M}\to\mathcal{M}$ the $G$-action on $\mathcal{M}$ and $g\in G$, and hence it can be reduced to the quotient space $\mathcal{M}/G$.
More precisely, denoting by $\rho:\mathcal{M}\to\mathcal{M}/G$ the orbit projection, the reduced dynamics on $\mathcal{M}/G$ is described by the integral curves of the vector field $X_{\mbox{\tiny{red}}}:=T\rho(X_{\mbox{\tiny{nh}}}).$ (2.5)

Splitting of the tangent bundle. The dimension assumption ensures the existence of a vertical complement $W$ of the constraint distribution $D$ (see [3]), that is, $W$ is a distribution on $Q$ so that $TQ=D\oplus W\qquad\mbox{where}\qquad W\subset V.$ (2.6) A vertical complement $W$ also induces a splitting of the vertical space $V=S\oplus W$. Moreover, there is a one-to-one correspondence between the choice of an $Ad$-invariant subbundle $\mathfrak{g}_{W}\to Q$ of $\mathfrak{g}\times Q\to Q$ such that, at each $q\in Q$, $(\mathfrak{g}\times Q)_{q}=(\mathfrak{g}_{S})_{q}\oplus(\mathfrak{g}_{W})_{q},$ (2.7) and the choice of a $G$-invariant vertical complement $W$ of the constraints.

###### Remark 2.2.

If the $G$-action is free, the existence of a $G$-invariant vertical complement $W$ is guaranteed by choosing $W=S^{\perp}\cap V$, where $S^{\perp}$ denotes the orthogonal complement of $S$ with respect to the ($G$-invariant) kinetic energy metric (however, $W$ does not have to be chosen in this way). In the case of non-free actions, as anticipated in the Introduction, we restrict our study to the submanifold $\widetilde{Q}$ of $Q$ where the action is free (see Examples 5.2 and 5.3). (If the action is not free, it can be proven that, for compact Lie groups $G$ (or the product of a compact Lie group and a vector space), the dimension assumption guarantees that it is always possible to choose a $G$-invariant vertical complement $W$, [4].) $\diamond$

Next, we pull back the decomposition (2.6) to $\mathcal{M}$.
From (2.6) and (2.1) we obtain the corresponding decomposition on $T\mathcal{M}$, $T\mathcal{M}=\mathcal{C}\oplus\mathcal{W}\qquad\mbox{with}\qquad\mathcal{W}\subset\mathcal{V},$ (2.8) where, at each $m\in\mathcal{M}$, $\mathcal{W}_{m}=\\{(\xi^{q})_{\mbox{\tiny{$\mathcal{M}$}}}(m)\ :\ \xi^{q}\in(\mathfrak{g}_{W})_{q}\mbox{ for }q=\tau_{\mbox{\tiny{$\mathcal{M}$}}}(m)\\}$. We define the distribution $\mathcal{S}=\mathcal{C}\cap\mathcal{V}$ or, equivalently, for each $m\in\mathcal{M}$, $\mathcal{S}_{m}=\\{(\xi^{q})_{\mbox{\tiny{$\mathcal{M}$}}}(m)\ :\ \xi^{q}\in(\mathfrak{g}_{S})_{q}\ \mbox{for }q=\tau_{\mbox{\tiny{$\mathcal{M}$}}}(m)\\}.$

### 2.2 Horizontal gauge momenta

Consider a nonholonomic system $(\mathcal{M},\Omega_{\mbox{\tiny{$\mathcal{M}$}}}|_{\mathcal{C}},H_{\mbox{\tiny{$\mathcal{M}$}}})$ with a $G$-symmetry, and recall that $\Theta_{\mbox{\tiny{$\mathcal{M}$}}}$ denotes the Liouville 1-form restricted to $\mathcal{M}$ (i.e., $\Theta_{\mbox{\tiny{$\mathcal{M}$}}}:=\iota^{*}\Theta_{Q}$). It is well known that for a nonholonomic system an element $\eta$ of the Lie algebra does not necessarily induce a first integral of the type $\mathcal{J}_{\eta}:={\bf i}_{\eta_{\mathcal{M}}}\Theta_{\mbox{\tiny{$\mathcal{M}$}}}$ (see [30] for a discussion of this fact).

###### Definition 2.3 ([9, 28]).

A function ${\mathcal{J}}\in C^{\infty}(\mathcal{M})$ is a horizontal gauge momentum if there exists $\zeta\in\Gamma(\mathfrak{g}_{S})$ such that ${\mathcal{J}}={\mathcal{J}}_{\zeta}:={\bf i}_{\zeta_{\mathcal{M}}}\Theta_{\mbox{\tiny{$\mathcal{M}$}}}$ and ${\mathcal{J}}$ is a first integral of the nonholonomic dynamics $X_{\mbox{\tiny{nh}}}$, i.e., $X_{\mbox{\tiny{nh}}}(\mathcal{J})=0$. In this case, the section $\zeta\in\Gamma(\mathfrak{g}_{S})$ is called a horizontal gauge symmetry. We are interested in looking for horizontal gauge momenta of a given nonholonomic system with symmetries satisfying the dimension assumption.
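As a concrete illustration of Definition 2.3, we sketch the classical nonholonomic particle (a standard textbook example, added here for illustration; it is not one of the systems studied in this paper): $Q=\mathbb{R}^3$, $L=\frac12(\dot x^2+\dot y^2+\dot z^2)$, constraint $\dot z=y\dot x$, with $G=\mathbb{R}^2$ acting by translations in $x$ and $z$. The worked computation shows that the section $\zeta\in\Gamma(\mathfrak{g}_S)$ must carry a configuration-dependent coefficient:

```latex
% Here V = span{\partial_x, \partial_z}, D = span{\partial_x + y\,\partial_z, \partial_y},
% so S = D \cap V = span{\partial_x + y\,\partial_z}, and \mathfrak{g}_S is generated by
% the section \xi(q) = e_x + y\,e_z.  On \mathcal{M}: p_x = \dot{x},\ p_z = y\dot{x}.
J_{\xi} \;=\; \mathbf{i}_{\xi_{\mathcal{M}}}\Theta_{\mathcal{M}}
       \;=\; p_x + y\,p_z \;=\; \dot{x}\,(1+y^2),
\qquad
\frac{d}{dt}J_{\xi} \;=\; y\,\dot{y}\,\dot{x} \;\neq\; 0,
% (using \ddot{x} = -y\dot{y}\dot{x}/(1+y^2) from the Lagrange-d'Alembert equations),
% so the constant-coefficient section \xi is not a horizontal gauge symmetry.
% Rescaling by f(y) = (1+y^2)^{-1/2} gives \zeta = f\,\xi, with
\mathcal{J}_{\zeta} \;=\; f\,J_{\xi} \;=\; \dot{x}\,\sqrt{1+y^2},
\qquad
\frac{d}{dt}\mathcal{J}_{\zeta}
  \;=\; \dot{y}\,\dot{x}\,\bigl(f'(y)(1+y^2)+y\,f(y)\bigr) \;=\; 0,
% since f'(y)(1+y^2) = -y\,f(y).  Hence \zeta \in \Gamma(\mathfrak{g}_S) is a horizontal
% gauge symmetry and \mathcal{J}_\zeta the corresponding horizontal gauge momentum.
```

The rescaling factor $f$ plays exactly the role of a coordinate function in the sense of (2.12).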
Looking for a horizontal gauge momentum $\mathcal{J}$ is equivalent to looking for the corresponding horizontal gauge symmetry.

###### Remark 2.4.

The original definition of horizontal gauge momentum introduced in [9] (and later in [28, 31]) was not stated exactly as in Definition 2.3, but given in local coordinates. $\diamond$

The nonholonomic momentum map ([12]) $J^{\mbox{\tiny{nh}}}:\mathcal{M}\to\mathfrak{g}_{S}^{*}$ is the bundle map over the identity given, for each $m\in\mathcal{M}$ and $\xi\in\mathfrak{g}_{S}|_{m}$, by $\langle J^{\mbox{\tiny{nh}}},\xi\rangle(m)={\bf i}_{\xi_{\mathcal{M}}}\Theta_{\mbox{\tiny{$\mathcal{M}$}}}(m).$ (2.9) Hence, a horizontal gauge momentum can also be seen as a function of the type $\langle J^{\mbox{\tiny{nh}}},\zeta\rangle\in C^{\infty}(\mathcal{M})$ that is a first integral of $X_{\mbox{\tiny{nh}}}$.

###### Proposition 2.5.

A nonholonomic system $(\mathcal{M},\Omega_{\mbox{\tiny{$\mathcal{M}$}}}|_{\mathcal{C}},H_{\mbox{\tiny{$\mathcal{M}$}}})$ with a $G$-symmetry satisfying the dimension assumption admits at most $k=\textup{rank}(S)$ (functionally independent) horizontal gauge momenta.

###### Proof.

Consider $\xi_{1},\xi_{2}\in\Gamma(\mathfrak{g}_{S})$. It is easy to see that if $J_{1}={\bf i}_{(\xi_{1})_{\mbox{\tiny{$\mathcal{M}$}}}}\Theta_{\mbox{\tiny{$\mathcal{M}$}}}$ and $J_{2}={\bf i}_{(\xi_{2})_{\mbox{\tiny{$\mathcal{M}$}}}}\Theta_{\mbox{\tiny{$\mathcal{M}$}}}$ are functionally independent functions, then $\xi_{1},\xi_{2}$ are linearly independent. Since the bundle $\mathfrak{g}_{S}\to Q$ has rank $k$, at most $k$ sections can be linearly independent, and the claim follows. ∎

Observe that the existence of a horizontal gauge momentum implies the existence of a global section of $\mathfrak{g}_{S}\to Q$.
Hence, in order to prove that a nonholonomic system admits exactly $k$ horizontal gauge symmetries, we have to assume the triviality of the bundle $\mathfrak{g}_{S}\to Q$, that is, $\mathfrak{g}_{S}\to Q$ admits a global basis of sections that we denote by $\mathfrak{B}_{\mathfrak{g}_{S}}=\\{\xi_{1},...,\xi_{k}\\}.$ (2.10) The basis $\mathfrak{B}_{\mathfrak{g}_{S}}$ induces functions $J_{1},...,J_{k}$ on $\mathcal{M}$ (linear on the fibers) defined by $J_{i}:=\langle J^{\mbox{\tiny{nh}}},\xi_{i}\rangle={\bf i}_{(\xi_{i})_{\mathcal{M}}}\Theta_{\mbox{\tiny{$\mathcal{M}$}}}\qquad\mbox{for }i=1,...,k.$ (2.11) If ${\mathcal{J}}\in C^{\infty}(\mathcal{M})$ is a horizontal gauge momentum with $\zeta$ its associated horizontal gauge symmetry, then $\mathcal{J}$ and $\zeta$ can be written, with respect to the basis (2.10), as ${\mathcal{J}}=f_{i}J_{i}\qquad\mbox{and}\qquad\zeta=f_{i}\xi_{i},\quad\mbox{for}\ f_{i}\in C^{\infty}(Q).$ (2.12) We call the functions $f_{i}$, $i=1,...,k$, the coordinate functions of $\mathcal{J}$ with respect to the basis $\mathfrak{B}_{\mathfrak{g}_{S}}=\\{\xi_{1},...,\xi_{k}\\}$. From now on, unless otherwise stated, we assume the following conditions on the symmetry given by the action of the Lie group $G$.

###### Conditions ${\mathcal{A}}$.

We say that a nonholonomic system with a $G$-symmetry satisfies Conditions $\mathcal{A}$ if

1. $(\mathcal{A}1)$ the dimension assumption (2.3) is fulfilled;
2. $(\mathcal{A}2)$ the bundle $\mathfrak{g}_{S}\longrightarrow Q$ is trivial;
3. $(\mathcal{A}3)$ the action of $G$ on $Q$ is proper and free.

A section $\xi$ of the bundle $Q\times\mathfrak{g}\to Q$ is $G$-invariant if $[\xi,\eta]=0$ for all $\eta\in\mathfrak{g}$. As a consequence of Conditions $\mathcal{A}$ we obtain the following Lemma.

###### Lemma 2.6.

Consider a nonholonomic system with a $G$-symmetry satisfying Conditions $\mathcal{A}$; then
$(i)$ there exists a global basis $\mathfrak{B}_{\mathfrak{g}_{S}}$ of $\Gamma(\mathfrak{g}_{S})$ given by $G$-invariant sections;

$(ii)$ for $\xi\in\Gamma(\mathfrak{g}_{S})$, the function $J_{\xi}={\bf i}_{\xi_{\mbox{\tiny{$\mathcal{M}$}}}}\Theta_{\mbox{\tiny{$\mathcal{M}$}}}$ is $G$-invariant if and only if $\xi$ is $G$-invariant;

$(iii)$ let $\rho_{{\mbox{\tiny{$Q$}}}}:Q\to Q/G$ be the orbit projection associated to the $G$-action on $Q$; if $X\in\mathfrak{X}(Q)$ is $\rho_{\mbox{\tiny{$Q$}}}$-projectable, then $[X,\xi_{Q}]\in\Gamma(V)$ for $\xi\in\Gamma(Q\times\mathfrak{g}\to Q)$.

###### Proof.

Items $(ii)$ and $(iii)$ were already proven in [8, Lemma 3.8]. To prove item $(i)$, observe that conditions ($\mathcal{A}2$) and ($\mathcal{A}3$) imply that $S$ admits a global basis of $G$-invariant sections $\\{Y_{1},...,Y_{k}\\}$, i.e., $[Y_{i},\nu_{Q}]=0$ for all $\nu\in\mathfrak{g}$. Since the action is free, we conclude that, for $(\xi_{i})_{\mbox{\tiny{$Q$}}}=Y_{i}$, the bracket $[\xi_{i},\nu]\in\Gamma(Q\times\mathfrak{g})$ is the zero section, and thus the $\xi_{i}$ are $G$-invariant. ∎

Under Conditions $\mathcal{A}$, we guarantee the existence of a global $G$-invariant basis $\mathfrak{B}_{\mathfrak{g}_{S}}$ of sections of $\mathfrak{g}_{S}\to Q$ with associated $G$-invariant functions $J_{i}$ (defined as in (2.11)). Hence, $\mathcal{J}$ is a $G$-invariant horizontal gauge momentum if and only if the corresponding coordinate functions $f_{i}$ in (2.12) are $G$-invariant as well.

## 3 A momentum equation

### 3.1 An intrinsic momentum equation

In order to achieve our goal of giving a precise estimate of the number of (functionally independent) horizontal gauge momenta of a nonholonomic system, we write a momentum equation. Let $(\mathcal{M},\Omega_{\mbox{\tiny{$\mathcal{M}$}}}|_{\mathcal{C}},H_{\mbox{\tiny{$\mathcal{M}$}}})$ be a nonholonomic system with a $G$-symmetry satisfying Conditions $\mathcal{A}$.
First, we consider a decomposition (or a principal connection) $TQ=H\oplus V\qquad\mbox{so that}\qquad H\subset D.$ (3.13) We denote by $A:TQ\to\mathfrak{g}$ the connection 1-form such that $\textup{Ker}A=H$. Since the vertical space $V$ is also decomposed as $V=S\oplus W$, the connection $A$ can be written as $A=A_{S}+A_{W}$, where $A_{W}:TQ\to\mathfrak{g}$ is given, for each $X\in TQ$, by $A_{W}(X)=\eta\qquad\mbox{if and only if}\qquad\eta_{\mbox{\tiny{$Q$}}}=P_{W}(X),$ and $A_{S}:TQ\to\mathfrak{g}$ is given by $A_{S}(X)=\xi\qquad\mbox{if and only if}\qquad\xi_{\mbox{\tiny{$Q$}}}=P_{S}(X),$ (3.14) where $P_{W}:TQ\to W$ and $P_{S}:TQ\to S$ are the corresponding projections associated to the decomposition $TQ=H\oplus S\oplus W.$ (3.15) Second, we see that the maps $A_{S}$ and $A_{W}$ each define a corresponding 2-form on $Q$ in the following way (see [3]): on the one hand, the $W$-curvature on $Q$ is a $\mathfrak{g}$-valued 2-form defined, for each $X,Y\in TQ$, as $K_{W}(X,Y)=d^{D}A_{W}(X,Y)=dA_{W}(P_{D}(X),P_{D}(Y))=-A_{W}([P_{D}(X),P_{D}(Y)]),$ with $P_{D}:TQ=D\oplus W\to D$ the projection to the first factor. On the other hand, after the choice of a global basis $\mathfrak{B}_{\mathfrak{g}_{S}}=\\{\xi_{1},...,\xi_{k}\\}$ of $\mathfrak{g}_{S}\to Q$, the $\mathfrak{g}$-valued 1-form $A_{S}$ on $Q$ can be written as $A_{S}={Y}^{i}\otimes\xi_{i},$ where ${Y}^{i}$ are 1-forms on $Q$ such that ${Y}^{i}|_{H}={Y}^{i}|_{W}=0$ and ${Y}^{i}((\xi_{j})_{\mbox{\tiny{$Q$}}})=\delta_{ij}$ for all $i=1,...,k$ (recall that summation over repeated indices is understood).
Then the corresponding $\mathfrak{g}$-valued 2-form is given, for each $X,Y\in TQ$, by $(d^{D}Y^{i}\otimes\xi_{i})(X,Y)=dY^{i}(P_{D}(X),P_{D}(Y))\,\xi_{i}.$ Recalling that $\tau_{\mbox{\tiny{$\mathcal{M}$}}}:\mathcal{M}\to Q$ is the canonical projection, we define the $\mathfrak{g}$-valued 2-forms $\bar{\sigma}_{\mathfrak{g}_{S}}$ and $\sigma_{\mathfrak{g}_{S}}$ on $Q$ and $\mathcal{M}$ respectively, by $\begin{split}\bar{\sigma}_{\mathfrak{g}_{S}}&:=K_{W}+d^{D}Y^{i}\otimes\xi_{i},\\\ \sigma_{\mathfrak{g}_{S}}&:=\tau_{\mbox{\tiny{$\mathcal{M}$}}}^{*}\bar{\sigma}_{\mathfrak{g}_{S}}\,.\end{split}$ (3.16) Equivalently, $\sigma_{\mathfrak{g}_{S}}$ is given by $\sigma_{\mathfrak{g}_{S}}=\mathcal{K}_{\mbox{\tiny{$\mathcal{W}$}}}+d^{\mathcal{C}}{\mathcal{Y}}^{i}\otimes\xi_{i}$, where $\mathcal{K}_{\mbox{\tiny{$\mathcal{W}$}}}=\tau_{\mbox{\tiny{$\mathcal{M}$}}}^{*}K_{W}$, $\mathcal{Y}^{i}=\tau_{\mbox{\tiny{$\mathcal{M}$}}}^{*}Y^{i}$ and $d^{\mathcal{C}}\mathcal{Y}^{i}({\mathcal{X}},\mathcal{Y})=d\mathcal{Y}^{i}(P_{\mathcal{C}}(\mathcal{X}),P_{\mathcal{C}}(\mathcal{Y}))$ for $\mathcal{X},\mathcal{Y}\in T\mathcal{M}$, and $P_{\mathcal{C}}:T\mathcal{M}\to\mathcal{C}$ the projection associated to the decomposition (2.8).

###### Definition 3.1.

Consider a nonholonomic system $(\mathcal{M},\Omega_{\mbox{\tiny{$\mathcal{M}$}}}|_{\mathcal{C}},H_{\mbox{\tiny{$\mathcal{M}$}}})$ with a $G$-symmetry satisfying Conditions $\mathcal{A}$ and denote by $\mathfrak{B}_{\mathfrak{g}_{S}}=\\{\xi_{1},...,\xi_{k}\\}$ a global basis of $\Gamma(\mathfrak{g}_{S})$.
The 2-form $\langle J,\sigma_{\mathfrak{g}_{S}}\rangle$ on $\mathcal{M}$ is defined by $\begin{split}\langle J,\sigma_{\mathfrak{g}_{S}}\rangle:=&\ \langle J,\mathcal{K}_{\mbox{\tiny{$\mathcal{W}$}}}\rangle+\langle J,d^{\mathcal{C}}\mathcal{Y}^{i}\otimes\xi_{i}\rangle\\\ =&\ \langle J,\mathcal{K}_{\mbox{\tiny{$\mathcal{W}$}}}\rangle+J_{i}\,d^{\mathcal{C}}\mathcal{Y}^{i},\end{split}$ where $J:\mathcal{M}\to\mathfrak{g}^{*}$ is the canonical momentum map restricted to $\mathcal{M}$ and $\langle\cdot,\cdot\rangle$ denotes the pairing between $\mathfrak{g}^{*}$ and $\mathfrak{g}$. The 2-form $\langle J,\sigma_{\mathfrak{g}_{S}}\rangle$ already appeared in [8] for a specific choice of the basis $\mathfrak{B}_{\mathfrak{g}_{S}}$ (see Sec. 4.1).

###### Lemma 3.2.

Assume that Conditions $\mathcal{A}$ are satisfied; then

1. $(i)$ the $\mathfrak{g}$-valued 2-forms $\bar{\sigma}_{\mathfrak{g}_{S}}$ and $\sigma_{\mathfrak{g}_{S}}$ depend on the chosen basis $\mathfrak{B}_{\mathfrak{g}_{S}}$;
2. $(ii)$ if the basis $\mathfrak{B}_{\mathfrak{g}_{S}}$ is $G$-invariant, then the 2-form $\langle J,\sigma_{\mathfrak{g}_{S}}\rangle$ is $G$-invariant as well.

###### Proof.

It is straightforward to see that the $\mathfrak{g}$-valued 2-forms $\bar{\sigma}_{\mathfrak{g}_{S}}$ and $\sigma_{\mathfrak{g}_{S}}$ depend directly on the chosen basis $\mathfrak{B}_{\mathfrak{g}_{S}}$. Item $(ii)$ is proven in [8, Lemma 3.8]. ∎

###### Proposition 3.3 (Momentum equation).

Let us consider a nonholonomic system $(\mathcal{M},\Omega_{\mbox{\tiny{$\mathcal{M}$}}}|_{\mathcal{C}},H_{\mbox{\tiny{$\mathcal{M}$}}})$ with a $G$-symmetry satisfying Conditions $\mathcal{A}$, and let $\mathfrak{B}_{\mathfrak{g}_{S}}=\\{\xi_{1},...,\xi_{k}\\}$ be a (global) basis of $\Gamma(\mathfrak{g}_{S})$ with associated momenta $J_{1},...,J_{k}$ as in (2.11).
The function ${\mathcal{J}}=f_{i}J_{i}$, for $f_{i}\in C^{\infty}(Q)$, is a horizontal gauge momentum if and only if the coordinate functions $f_{i}$ satisfy the momentum equation $f_{i}\langle J,\sigma_{\mathfrak{g}_{S}}\rangle({\mathcal{Y}}_{i},X_{\mbox{\tiny{nh}}})+J_{i}X_{\mbox{\tiny{nh}}}(f_{i})=0,$ (3.17) where ${\mathcal{Y}}_{i}:=(\xi_{i})_{\mbox{\tiny{$\mathcal{M}$}}}$.

###### Proof.

First, from Lemma 2.6 observe that if $\mathcal{X}$ is a vector field on $\mathcal{M}$ that is $T\rho$-projectable, then $[\mathcal{Y}_{i},\mathcal{X}]\in\Gamma(\mathcal{V})$ for $i=1,...,k$. Thus, using (3.16), $\begin{split}\sigma_{\mathfrak{g}_{S}}(\mathcal{Y}_{i},\mathcal{X})&=[d^{\mathcal{C}}\tau_{\mbox{\tiny{$\mathcal{M}$}}}^{*}A_{W}+d^{\mathcal{C}}\tau_{\mbox{\tiny{$\mathcal{M}$}}}^{*}Y^{j}\otimes\xi_{j}](\mathcal{Y}_{i},\mathcal{X})=-\tau_{\mbox{\tiny{$\mathcal{M}$}}}^{*}A_{W}([\mathcal{Y}_{i},\mathcal{X}])-\tau_{\mbox{\tiny{$\mathcal{M}$}}}^{*}Y^{j}([\mathcal{Y}_{i},\mathcal{X}])\otimes\xi_{j}\\\ &=-\tau_{\mbox{\tiny{$\mathcal{M}$}}}^{*}A([\mathcal{Y}_{i},\mathcal{X}]).\end{split}$ Second, by the definition of the canonical momentum map $J:\mathcal{M}\to\mathfrak{g}^{*}$, we get that $\langle J,\sigma_{\mathfrak{g}_{S}}\rangle(\mathcal{Y}_{i},\mathcal{X})=-\langle J,\tau_{\mbox{\tiny{$\mathcal{M}$}}}^{*}A([\mathcal{Y}_{i},\mathcal{X}])\rangle=-{\bf i}_{[\mathcal{Y}_{i},\mathcal{X}]}\Theta_{\mbox{\tiny{$\mathcal{M}$}}}.$ Then, recalling that $\Omega_{\mbox{\tiny{$\mathcal{M}$}}}=-d\Theta_{\mbox{\tiny{$\mathcal{M}$}}}$ and using that $\Theta_{\mbox{\tiny{$\mathcal{M}$}}}(\mathcal{X})$ is an invariant function, we observe that $\begin{split}(\Omega_{\mbox{\tiny{$\mathcal{M}$}}}+\langle J,\sigma_{\mathfrak{g}_{S}}\rangle)(\mathcal{Y}_{i},\mathcal{X})&=-{\mathcal{Y}}_{i}(\Theta_{\mbox{\tiny{$\mathcal{M}$}}}(\mathcal{X}))+\mathcal{X}(J_{i})+\Theta_{\mbox{\tiny{$\mathcal{M}$}}}([\mathcal{Y}_{i},\mathcal{X}])-{\bf i}_{[\mathcal{Y}_{i},\mathcal{X}]}\Theta_{\mbox{\tiny{$\mathcal{M}$}}}=dJ_{i}(\mathcal{X}).\end{split}$ Now, $\mathcal{J}=f_{i}J_{i}$ is a first integral of $X_{\mbox{\tiny{nh}}}$ if and only if $0=d\mathcal{J}(X_{\mbox{\tiny{nh}}})=f_{i}dJ_{i}(X_{\mbox{\tiny{nh}}})+J_{i}X_{\mbox{\tiny{nh}}}(f_{i}),$ which is equivalent, for $\mathcal{X}=X_{\mbox{\tiny{nh}}}$, to $0=f_{i}(\Omega_{\mbox{\tiny{$\mathcal{M}$}}}+\langle J,\sigma_{\mathfrak{g}_{S}}\rangle)(\mathcal{Y}_{i},X_{\mbox{\tiny{nh}}})+J_{i}X_{\mbox{\tiny{nh}}}(f_{i})=-f_{i}dH_{\mbox{\tiny{$\mathcal{M}$}}}(\mathcal{Y}_{i})+f_{i}\langle J,\sigma_{\mathfrak{g}_{S}}\rangle(\mathcal{Y}_{i},X_{\mbox{\tiny{nh}}})+J_{i}X_{\mbox{\tiny{nh}}}(f_{i}).$ Using the $G$-invariance of the hamiltonian function $H_{\mbox{\tiny{$\mathcal{M}$}}}$, we get (3.17). ∎

###### Remark 3.4.

From the proof of Proposition 3.3, we observe that the momentum equation can be equivalently written as $0=f_{i}\Theta_{\mbox{\tiny{$\mathcal{M}$}}}([(\xi_{i})_{\mbox{\tiny{$\mathcal{M}$}}},X_{{\mbox{\tiny{nh}}}}])-J_{i}X_{{\mbox{\tiny{nh}}}}(f_{i})$. $\diamond$

In light of Proposition 3.3 (or, more precisely, Remark 3.4), we recover the well-known result that horizontal symmetries generate first integrals [10, 12]. Recall that a horizontal symmetry is an element $\eta\in\mathfrak{g}$ such that $\eta_{Q}\in\Gamma(D)$ (see e.g. [11]).

###### Corollary 3.5 (Horizontal symmetries).

Let $(\mathcal{M},\Omega_{\mbox{\tiny{$\mathcal{M}$}}}|_{\mathcal{C}},H_{\mbox{\tiny{$\mathcal{M}$}}})$ be a nonholonomic system with a $G$-symmetry satisfying Conditions $\mathcal{A}$. If the bundle $\mathfrak{g}_{S}\to Q$ admits a horizontal symmetry $\eta$, then the function $\langle J,\eta\rangle$ is a horizontal gauge momentum for the nonholonomic system. Hence, if there is a global basis of horizontal symmetries of $\mathfrak{g}_{S}$, then the nonholonomic system admits $k=\textup{rank}\,(\mathfrak{g}_{S})$ horizontal gauge momenta.

###### Proof.
If $\eta_{1}$ is a horizontal symmetry, let $\mathfrak{B}_{\mathfrak{g}_{S}}=\\{\eta_{1},\xi_{2},...,\xi_{k}\\}$ be a basis of $\Gamma(\mathfrak{g}_{S})$. A section $\zeta=f_{1}\eta_{1}+f_{i}\xi_{i}$ is a horizontal gauge symmetry if $J_{1}X_{\mbox{\tiny{nh}}}(f_{1})+f_{i}\Theta_{\mbox{\tiny{$\mathcal{M}$}}}([X_{{\mbox{\tiny{nh}}}},(\xi_{i})_{\mbox{\tiny{$\mathcal{M}$}}}])+J_{i}X_{{\mbox{\tiny{nh}}}}(f_{i})=0$, since $[X_{\mbox{\tiny{nh}}},(\eta_{1})_{\mbox{\tiny{$\mathcal{M}$}}}]=0$. Then we see that $f_{1}=1$ and $f_{i}=0$ for $i=2,...,k$ is a solution of the momentum equation, and hence $\eta_{1}$ is a horizontal gauge symmetry. As a consequence, if the bundle $\mathfrak{g}_{S}\to Q$ admits a basis of horizontal symmetries, then the nonholonomic system admits $k$ horizontal gauge momenta. ∎

A set of solutions $(f_{1},...,f_{k})$ of the momentum equation (3.17) may depend on $\mathcal{M}$ and not only on $Q$. Based on the fact that equation (3.17) is quadratic in the fibers, we show next that it is equivalent to a system of partial differential equations for the functions $f_{i}$ on the manifold $Q$.

### 3.2 The “strong invariance” condition on the kinetic energy

We now introduce and study an invariance property, called strong invariance, that involves the kinetic energy, the constraints and the $G$-symmetry. This condition is crucial to state our main result in Theorem 3.15.

###### Definition 3.6.

Consider a Riemannian metric $\kappa$ on a manifold $Q$ and a distribution $S\subset TQ$ on $Q$. The metric $\kappa$ is called strong invariant on $S$ (or $S$-strong invariant) if, for all $G$-invariant sections $Y_{1},Y_{2},Y_{3}\in\Gamma(S)$, it holds that $\kappa(Y_{1},[Y_{2},Y_{3}])=-\kappa(Y_{3},[Y_{2},Y_{1}]).$

First we observe that, for a Riemannian metric $\kappa$, being $G$-invariant is weaker than being strong invariant on the whole tangent bundle, as the following example shows:

###### Example 3.7. The case $Q=G$ with a strong invariant metric on $TG$.
Consider a Lie group $G$ acting on itself by the left action and let $\kappa_{G}$ be a Riemannian metric on it. In this case, the metric being $G$-invariant is equivalent to being left invariant, while being strong invariant on $TG$ is equivalent to being bi-invariant. In fact, if the metric is strong invariant on $TG$, then $\kappa_{G}([Y_{i},Y_{j}],Y_{l})=-\kappa_{G}(Y_{j},[Y_{i},Y_{l}])$ for all $Y_{i}\in\mathfrak{X}(G)$ such that $[Y_{i},\eta^{R}]=0$ for all $\eta\in\mathfrak{g}$, with $\eta^{R}$ the corresponding right-invariant vector field on $G$ (we are using that the infinitesimal generator associated to the left action is the corresponding right-invariant vector field on $G$). Then, the inner product $\langle\cdot,\cdot\rangle$ on $\mathfrak{g}$ defined by $\langle\eta_{1},\eta_{2}\rangle=\kappa_{G}(\eta_{1}^{L},\eta_{2}^{L})(e),\qquad\mbox{for }\eta_{i}\in\mathfrak{g},$ is $ad$-invariant, and hence the metric $\kappa_{G}$ turns out to be bi-invariant on $G$.

###### Example 3.8. A nonholonomic system with a strong invariant kinetic energy on the vertical distribution $V$.

Consider a nonholonomic system $(\mathcal{M},\Omega_{\mbox{\tiny{$\mathcal{M}$}}}|_{\mathcal{C}},H_{\mbox{\tiny{$\mathcal{M}$}}})$ with a $G$-symmetry. If the kinetic energy metric $\kappa$ is strong invariant on $V$, then it induces a bi-invariant metric on the Lie group $G$. This case may only occur when the group of symmetries $G$ is compact or a product of a compact Lie group with a vector space. In order to prove this, we first observe the following.

###### Lemma 3.9.

The kinetic energy metric satisfies $\kappa([Y_{i},Y_{j}],Y_{l})=-\kappa(Y_{j},[Y_{i},Y_{l}])$ for all $G$-invariant $Y_{i}\in\Gamma(V)$ if and only if $\kappa([(\eta_{a})_{Q},(\eta_{b})_{Q}],(\eta_{c})_{Q})=-\kappa((\eta_{b})_{Q},[(\eta_{a})_{Q},(\eta_{c})_{Q}])$ for all $\eta_{i}\in\mathfrak{g}$.

###### Proof.

The vertical distribution $V$ admits a basis of $G$-invariant sections $\\{Y_{1},...,Y_{n}\\}$.
For $\eta\in\mathfrak{g}$, there are functions $g^{j}\in C^{\infty}(Q)$, $j=1,...,n$, so that $\eta_{Q}=g^{j}Y_{j}$ and hence $0=[Y_{i},\eta_{Q}]=g^{j}[Y_{i},Y_{j}]+Y_{i}(g^{j})Y_{j}$. Then we obtain that $\begin{split}\kappa([(\eta_{a})_{Q},(\eta_{b})_{Q}],(\eta_{c})_{Q})&=g_{a}^{i}g_{b}^{j}g_{c}^{l}\kappa([Y_{i},Y_{j}],Y_{l})+g_{a}^{i}g_{c}^{l}\kappa(Y_{i}(g_{b}^{j})Y_{j},Y_{l})-g_{b}^{j}g_{c}^{l}\kappa(Y_{j}(g_{a}^{i})Y_{i},Y_{l})\\\ &=-g_{a}^{i}g_{b}^{j}g_{c}^{l}\kappa([Y_{i},Y_{j}],Y_{l}).\end{split}$ Conversely, we write $Y_{i}=g_{i}^{a}\eta_{a}$ and we repeat the computation. ∎ As a direct consequence of Lemma 3.9, if the kinetic energy is strong invariant on $V$, then $\kappa([(\eta_{a})_{Q},(\eta_{b})_{Q}],(\eta_{c})_{Q})=-\kappa((\eta_{b})_{Q},[(\eta_{a})_{Q},(\eta_{c})_{Q}])$ for all $\eta_{i}\in\mathfrak{g}$. Hence, for each $q\in Q$, there is an $ad$-invariant inner product on $\mathfrak{g}$ defined, for each $\eta_{1},\eta_{2}\in\mathfrak{g}$, by $\langle\eta_{1},\eta_{2}\rangle_{q}=\kappa((\eta_{1})_{Q}(q),(\eta_{2})_{Q}(q)).$ Therefore, there exists a family of bi-invariant metrics $\kappa_{G}^{q}$ on $G$ defined by $\kappa_{G}^{q}(\eta_{1}^{L}(g),\eta_{2}^{L}(g))=\langle\eta_{1},\eta_{2}\rangle_{q}$. ###### Example 3.10. The symmetry group $G$ is abelian. Consider a nonholonomic system $(\mathcal{M},\Omega_{\mbox{\tiny{$\mathcal{M}$}}}|_{\mathcal{C}},H_{\mbox{\tiny{$\mathcal{M}$}}})$ with a $G$-symmetry, where $G$ is an abelian Lie group. Then the Lie algebra $\mathfrak{g}$ is also abelian and the kinetic energy metric satisfies $\kappa([(\eta_{1})_{Q},(\eta_{2})_{Q}],(\eta_{3})_{Q})=0$ for all $\eta_{i}\in\mathfrak{g}$. Following Example 3.8, we also have that $\kappa([Y_{1},Y_{2}],Y_{3})=0$ for all $G$-invariant sections $Y_{i}$ on $V$ and hence the kinetic energy is trivially strong invariant on $V$. ###### Example 3.11. Horizontal symmetries. 
Consider a nonholonomic system $(\mathcal{M},\Omega_{\mbox{\tiny{$\mathcal{M}$}}}|_{\mathcal{C}},H_{\mbox{\tiny{$\mathcal{M}$}}})$ with a $G$-symmetry satisfying Conditions $\mathcal{A}$ and with the bundle $\mathfrak{g}_{S}\to Q$ admitting a global basis of $G$-invariant horizontal symmetries $\\{\eta_{1},...,\eta_{k}\\}$ of the bundle $\mathfrak{g}\times Q\to Q$. Then the vector space generated by the constant sections $\eta_{i}$ is an abelian subalgebra $\mathfrak{s}$ of $\mathfrak{g}$ and the kinetic energy metric is strong invariant on $S$. ### 3.3 Determining the horizontal gauge momenta (in global coordinates) Consider a nonholonomic system $(\mathcal{M},\Omega_{\mbox{\tiny{$\mathcal{M}$}}}|_{\mathcal{C}},H_{\mbox{\tiny{$\mathcal{M}$}}})$ with a $G$-symmetry satisfying Conditions $\mathcal{A}$. From now on, we will also assume that the manifold $Q/G$ has dimension 1 or, equivalently, that any horizontal space $H$ defined as in (3.13) has rank 1. That is, we add a fourth assumption to Conditions $\mathcal{A}$: ###### Condition $({\mathcal{A}}4)$. The $G$-symmetry satisfies that the manifold $Q/G$ has dimension 1. Now, let us consider the horizontal distribution $H$ defined in (3.13). ###### Definition 3.12. We say that $H$ is $S$-orthogonal if it is given by $H:=S^{\perp}\cap D,$ where the orthogonal space to $S$ is taken with respect to the kinetic energy metric. The $S$-orthogonality of $H$ implies that $H$ is a $G$-invariant distribution, while Condition $(\mathcal{A}4)$ guarantees that it is a trivial bundle and thus admits a ($G$-invariant) global generator. Now, let $(\mathcal{M},\Omega_{\mbox{\tiny{$\mathcal{M}$}}}|_{\mathcal{C}},H_{\mbox{\tiny{$\mathcal{M}$}}})$ be a nonholonomic system with a $G$-symmetry satisfying Conditions $(\mathcal{A}1)$-$(\mathcal{A}4)$ (that is, the $G$-symmetry satisfies Conditions $\mathcal{A}$ and Condition $(\mathcal{A}4)$). 
Then there is a global $G$-invariant basis $\mathfrak{B}_{\mathfrak{g}_{S}}=\\{\xi_{1},...,\xi_{k}\\}$ of sections of $\mathfrak{g}_{S}$ and, as usual, we denote by $Y_{i}:=(\xi_{i})_{Q}$ the corresponding sections on $S$. Denoting by $\rho_{Q}:Q\to Q/G$ the orbit projection and assuming that the horizontal space $H$ is $S$-orthogonal, there exists a globally defined section $X_{0}$ generating the horizontal bundle $H\to Q$ that is $\rho_{Q}$-projectable. Hence $\\{X_{0},Y_{1},...,Y_{k}\\}$ defines a global basis of $D=H\oplus S$. Following the splitting (3.15), we also consider a (possibly non-global) basis $\\{Z_{1},...,Z_{N}\\}$ of the vertical complement $W$ and we denote by $(v^{0},v^{1},...,v^{k},w^{1},...,w^{N})$ the coordinates on $TQ$ associated to the basis $\mathfrak{B}_{TQ}=\\{X_{0},Y_{1},...,Y_{k},Z_{1},...Z_{N}\\},$ (3.18) (for short we write the coordinates $(v^{0},v^{i},w^{a})$ associated to the basis $\mathfrak{B}_{TQ}=\\{X_{0},Y_{j},Z_{a}\\}$). If $\mathfrak{B}_{T^{*}Q}=\\{X^{0},Y^{i},Z^{a}\\}$ is the basis of $T^{*}Q$ dual to $\mathfrak{B}_{TQ}$, we denote by $(p_{0},p_{i},p_{a})$ the induced coordinates on $T^{*}Q$. Then the constraint submanifold $\mathcal{M}$ is described as $\mathcal{M}=\\{(q,p_{0},p_{i},p_{a})\in T^{*}Q\ :\ p_{a}=\kappa_{a{\mbox{\tiny{$0$}}}}v^{0}+\kappa_{aj}v^{j}\\},$ where $p_{0}=\kappa_{\mbox{\tiny{$00$}}}v^{0}+\kappa_{{\mbox{\tiny{$0$}}}i}v^{i}$ and $p_{i}=\kappa_{i{\mbox{\tiny{$0$}}}}v^{0}+\kappa_{ij}v^{j}$ with $\kappa_{\mbox{\tiny{$AB$}}}=\kappa(X_{A},X_{B})$ for $X_{A},X_{B}\in\mathfrak{B}_{TQ}$ (i.e., $A,B\in\\{0,i,a\\}$). 
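As a concrete instance of this coordinate description of $\mathcal{M}$, the relations $p_{a}=\kappa_{a0}v^{0}+\kappa_{aj}v^{j}$ can be checked numerically on the nonholonomic oscillator of the guiding example below (a sketch in Python; the frame $X_{0}=\partial_{y}$, $Y=\partial_{x}+y\partial_{z}$, $Z=\partial_{z}$ and the flat kinetic energy metric with $m=1$ are taken from that example):

```python
import math
import random

def kappa(u, v):
    # flat kinetic energy metric on Q (m = 1), in coordinates (x, y, z)
    return sum(a*b for a, b in zip(u, v))

random.seed(0)
for _ in range(100):
    y = random.uniform(-3.0, 3.0)
    X0 = (0.0, 1.0, 0.0)          # d/dy, the horizontal generator
    Y = (1.0, 0.0, y)             # d/dx + y d/dz, spanning S
    Z = (0.0, 0.0, 1.0)           # d/dz, spanning the vertical complement W
    v0 = random.uniform(-1.0, 1.0)
    vY = random.uniform(-1.0, 1.0)
    # momenta in the adapted frame: p_A = kappa_{A0} v^0 + kappa_{AY} v^Y
    pY = kappa(Y, X0)*v0 + kappa(Y, Y)*vY
    pz = kappa(Z, X0)*v0 + kappa(Z, Y)*vY
    # the description of M reduces to p_z = y p_Y / (1 + y^2)
    assert math.isclose(pz, y*pY/(1.0 + y*y), abs_tol=1e-12)
```

The last assertion is exactly the constraint defining $\mathcal{M}$ stated later in the guiding example of Section 3.3.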
We now define the dual basis $\mathfrak{B}_{T^{*}\\!\mathcal{M}}=\\{\mathcal{X}^{0},\mathcal{Y}^{i},\mathcal{Z}^{a},dp_{0},dp_{i}\\}\qquad\mbox{and}\qquad\mathfrak{B}_{T\mathcal{M}}=\\{\mathcal{X}_{0},\mathcal{Y}_{i},\mathcal{Z}_{a},\partial_{p_{0}},\partial_{p_{i}}\\}$ (3.19) of $T^{*}\mathcal{M}$ and $T\mathcal{M}$ respectively, where $\mathcal{X}^{0}=\tau_{\mbox{\tiny{$\mathcal{M}$}}}^{*}X^{0}$, $\mathcal{Y}^{i}=\tau_{\mbox{\tiny{$\mathcal{M}$}}}^{*}Y^{i}$, $\mathcal{Z}^{a}=\tau_{\mbox{\tiny{$\mathcal{M}$}}}^{*}Z^{a}$. Observe that, by the $G$-invariance of $p_{0}$ and $p_{i}$, $\mathcal{Y}_{i}=(\xi_{i})_{\mbox{\tiny{$\mathcal{M}$}}}$ and, moreover, by (2.11) $J_{i}={\bf i}_{\mathcal{Y}_{i}}\Theta_{\mbox{\tiny{$\mathcal{M}$}}}=p_{i}.$ We now write the momentum equation (3.17) in (global) coordinates, defined by the basis $\mathfrak{B}_{TQ}$ in (3.18). ###### Lemma 3.13. Suppose that the $G$-symmetry satisfies Conditions $(\mathcal{A}1)$-$(\mathcal{A}4)$ and the horizontal distribution $H$ in (3.13) is $S$-orthogonal. In coordinates associated to the basis (3.18), a function ${\mathcal{J}}\in C^{\infty}(\mathcal{M})$ of the form ${\mathcal{J}}=f_{i}J_{i}$ is a $G$-invariant horizontal gauge momentum of the nonholonomic system $(\mathcal{M},\Omega_{\mbox{\tiny{$\mathcal{M}$}}}|_{\mathcal{C}},H_{\mbox{\tiny{$\mathcal{M}$}}})$ if and only if the coordinate functions $f_{i}\in C^{\infty}(Q)^{G}$ satisfy $v^{l}v^{j}\left(\,f_{i}\kappa(Y_{j},[Y_{i},Y_{l}])\,\right)+(v^{0})^{2}\left(\,f_{i}\kappa(X_{0},[Y_{i},X_{0}])\,\right)+v^{0}v^{j}P_{0j}=0,$ (3.20) where $P_{0j}:=f_{i}(\kappa(Y_{j},[Y_{i},X_{0}])+\kappa(X_{0},[Y_{i},Y_{j}]))-\kappa_{ij}X_{0}(f_{i})$. ###### Proof. We will show that (3.20) is the coordinate version of the momentum equation (3.17). First, observe that the 2-form $\langle J,\sigma_{\mathfrak{g}_{S}}\rangle$ is semi-basic with respect to the bundle $\tau_{\mbox{\tiny{$\mathcal{M}$}}}:\mathcal{M}\to Q$. 
Let us denote by $\mathcal{X}_{1}$, $\mathcal{X}_{2}$ any element in the subset $\\{{\mathcal{X}}_{0},{\mathcal{Y}}_{1},...,{\mathcal{Y}}_{k}\\}$ of the basis $\mathfrak{B}_{T\mathcal{M}}$ in (3.19), and by $X_{1}:=T\tau_{\mbox{\tiny{$\mathcal{M}$}}}(\mathcal{X}_{1})$ and $X_{2}:=T\tau_{\mbox{\tiny{$\mathcal{M}$}}}(\mathcal{X}_{2})$ the corresponding elements in the basis of $\mathfrak{B}_{TQ}$. Then we have $\displaystyle\langle J,\mathcal{K}_{\mbox{\tiny{$\mathcal{W}$}}}\rangle(\mathcal{X}_{1},\mathcal{X}_{2})$ $\displaystyle=$ $\displaystyle p_{a}\,dZ^{a}(X_{1},X_{2})=-p_{a}Z^{a}([X_{1},X_{2}])=-(\kappa_{{\mbox{\tiny{$0$}}}a}v^{0}+\kappa_{ja}v^{j})Z^{a}([X_{1},X_{2}]),$ $\displaystyle\langle J,d^{\mathcal{C}}{\mathcal{Y}}^{i}\otimes\xi_{i}\rangle(\mathcal{X}_{1},\mathcal{X}_{2})$ $\displaystyle=$ $\displaystyle p_{i}d{\mathcal{Y}}^{i}(\mathcal{X}_{1},\mathcal{X}_{2})=-p_{i}Y^{i}([X_{1},X_{2}])=-(\kappa_{{\mbox{\tiny{$0$}}}i}v^{0}+\kappa_{ij}v^{j})Y^{i}([X_{1},X_{2}]),$ $\displaystyle=$ $\displaystyle-\kappa_{ij}v^{j}Y^{i}([X_{1},X_{2}]),$ since $\kappa_{{\mbox{\tiny{$0$}}}i}=0$ by the $S$-orthogonality of $H$. 
Using that $[X_{1},X_{2}]\in\Gamma(V)$ (observe that $[Y_{i},Y_{j}]\in\Gamma(V)$ since $V$ is integrable, and $[X_{0},Y_{i}]\in\Gamma(V)$ since $X_{0}$ is $\rho_{\mbox{\tiny{$Q$}}}$-projectable, see Lemma 2.6), we obtain $[X_{1},X_{2}]=Z^{a}([X_{1},X_{2}])Z_{a}+Y^{j}([X_{1},X_{2}])Y_{j}$ and thus $\langle J,\sigma_{\mathfrak{g}_{S}}\rangle(\mathcal{X}_{1},\mathcal{X}_{2})=-v^{0}\kappa(X_{0},[X_{1},X_{2}])-v^{j}\kappa(Y_{j},[X_{1},X_{2}]).$ Second, using that $T\tau_{\mbox{\tiny{$\mathcal{M}$}}}(X_{\mbox{\tiny{nh}}}(q,p))=v^{0}X_{0}+v^{i}Y_{i}$ (recall that $X_{\mbox{\tiny{nh}}}$ is a second order equation) and also recalling that the functions $f_{i}$ are $G$-invariant on $Q$, we obtain that the momentum equation in Proposition 3.3 is written as $0=f_{i}v^{0}\langle J,\sigma_{\mathfrak{g}_{S}}\rangle(\mathcal{Y}_{i},\mathcal{X}_{0})+f_{i}v^{j}\langle J,\sigma_{\mathfrak{g}_{S}}\rangle(\mathcal{Y}_{i},\mathcal{Y}_{j})+p_{i}v^{0}X_{0}(f_{i}).$ Putting together the last two equations we obtain (3.20). ∎ ###### Remark 3.14. If the horizontal distribution $H$ is not chosen to be $S$-orthogonal, then the momentum equation (3.20) is modified in one of the terms: $v^{l}v^{j}\left(\,f_{i}\kappa(Y_{j},[Y_{i},Y_{l}])\,\right)+(v^{0})^{2}\left(\,f_{i}\kappa(X_{0},[Y_{i},X_{0}])-\kappa_{0i}X_{0}(f_{i})\,\right)+v^{0}v^{j}P_{0j}=0.$ In order to obtain the simplest form of the coordinate version of the momentum equation, we require the orthogonality condition between $H$ and $S$. $\diamond$ As a consequence of Lemma 3.13, we can state the main result of the paper. ###### Theorem 3.15. Consider a nonholonomic system $(\mathcal{M},\Omega_{\mbox{\tiny{$\mathcal{M}$}}}|_{\mathcal{C}},H_{\mbox{\tiny{$\mathcal{M}$}}})$ with a $G$-symmetry satisfying Conditions $(\mathcal{A}1)$-$(\mathcal{A}4)$ and with an $S$-orthogonal horizontal space $H$. 
Moreover, assume that the kinetic energy metric is strong invariant on $S$ and that $\kappa(X_{0},[Y,X_{0}])=0$ for $X_{0}$ a $\rho$-projectable vector field on $Q$ taking values in $H$ and for all $Y\in\Gamma(S)$. Then $(i)$ the system admits $k=\textup{rank}(S)$ $G$-invariant (functionally independent) horizontal gauge momenta. Moreover, let us consider a $G$-invariant basis $\mathfrak{B}_{\mathfrak{g}_{S}}=\\{\xi_{1},...,\xi_{k}\\}$ of $\mathfrak{g}_{S}$, with $Y_{i}=(\xi_{i})_{Q}$, and define the $G$-invariant functions $R_{ij}$ on $Q$ given by $R_{ij}=\kappa^{il}[\kappa(Y_{l},[Y_{j},X_{0}])+\kappa(X_{0},[Y_{j},Y_{l}])],$ (3.21) where $\kappa^{il}$ are the elements of the matrix $[\kappa|_{S}]^{-1}$ and $[\kappa|_{S}]$ is the matrix given by the elements $\kappa_{il}$. If $\bar{X}_{0}$ is a globally defined vector field on $Q/G$ such that $T\rho_{\mbox{\tiny{$Q$}}}(X_{0})=\bar{X}_{0}$, then $(ii)$ the $k$ solutions $\bar{f}^{l}=(\bar{f}^{l}_{1},...,\bar{f}^{l}_{k})$, $l=1,...,k$, of the linear system of ordinary differential equations on $Q/G$ given by $R_{ij}\bar{f}_{j}-\bar{X}_{0}(\bar{f}_{i})=0,$ (3.22) define $k$ (functionally independent) $G$-invariant horizontal gauge momenta given by ${\mathcal{J}}^{l}=f^{l}_{i}J_{i},$ for $J_{i}={\bf i}_{\xi_{\mbox{\tiny{$\mathcal{M}$}}}}\Theta_{\mbox{\tiny{$\mathcal{M}$}}}$ (the functions defined in (2.11)) and $f^{l}_{i}=\rho^{*}\bar{f}^{l}_{i}$, $l=1,...,k$. ###### Proof. Let us consider the $G$-invariant basis $\mathfrak{B}_{\mathfrak{g}_{S}}=\\{\xi_{1},...,\xi_{k}\\}$ in (2.10) with $Y_{i}=(\xi_{i})_{Q}$ for $i=1,...,k$ and the bases $\mathfrak{B}_{TQ}$ and $\mathfrak{B}_{T^{*}Q}$ in (3.18). Then, from Lemma 3.13 we have that ${\mathcal{J}}=f_{i}J_{i}$, for $f_{i}\in C^{\infty}(Q)^{G}$, is a horizontal gauge momentum if and only if equation (3.20) is satisfied. Since (3.20) is a second order polynomial in the variables $(v^{0},v^{i})$, it vanishes when its associated matrix is skew-symmetric, that is, when 
$(i)$ $\kappa(Y_{j},[Y_{i},Y_{l}])=-\kappa(Y_{l},[Y_{i},Y_{j}])$, for all $i,j,l=1,...,k$, $(ii)$ $f_{i}\kappa(X_{0},[Y_{i},X_{0}])=0$, and $(iii)$ $P_{0j}=0$, for all $j=1,...,k$. First we observe that items $(i)$ and $(ii)$ are trivially satisfied by the hypotheses of the theorem (item $(i)$ is just the definition of strong invariance). Second, we prove that item $(iii)$ determines the system of ordinary differential equations (3.22) defining the $G$-invariant functions $f_{i}$. Let us define the matrix $[N]$ with entries $N_{lj}=\kappa(Y_{l},[Y_{j},X_{0}])+\kappa(X_{0},[Y_{j},Y_{l}])$ and $[\kappa|_{S}]$ the kinetic energy matrix restricted to $S$ (which is symmetric and invertible with elements $\kappa_{li}$). Then, the condition $P_{0j}=0$ is written in matrix form as $[N]f=[\kappa|_{S}]X_{0}(f)$ for $f=(f_{1},...,f_{k})^{t}$, which is equivalent to $Rf=X_{0}(f)$ for $R$ the matrix with entries $R_{ij}=[\kappa|_{S}]^{il}N_{lj}$. Therefore, item $(iii)$ is satisfied if and only if the functions $f=(f_{1},...,f_{k})$ are a solution of the linear system of differential equations defined on $Q$ $R_{ij}f_{j}-X_{0}(f_{i})=0,\quad\mbox{for each }i=1,...,k.$ (3.23) Since $X_{0}\in\Gamma(H)$ is $\rho_{\mbox{\tiny{$Q$}}}$-projectable, there is a (globally defined) vector field $\bar{X}_{0}$ on $Q/G$ such that $T\rho_{\mbox{\tiny{$Q$}}}(X_{0})=\bar{X}_{0}$. Moreover, the $R_{ij}$ are also $G$-invariant functions ($\kappa,X_{0}$ and $Y_{i}$ are $G$-invariant), and thus we conclude that the system (3.23) is well defined on $Q/G$. That is, (3.23) represents a (globally defined) linear system of $k$ ordinary differential equations for the functions $(\bar{f}_{1},...,\bar{f}_{k})$ on $Q/G$, written as $R_{ij}\bar{f}_{j}-\bar{X}_{0}(\bar{f}_{i})=0,\quad\mbox{for each }i=1,...,k,$ (3.24) where the $R_{ij}$ are viewed here as functions on $Q/G$. The system (3.24) admits $k$ independent solutions $\bar{f}^{l}=(\bar{f}_{1}^{l},...,\bar{f}_{k}^{l})$ for $l=1,...,k$. 
Moreover, $f^{l}=(f_{1}^{l},...,f_{k}^{l})$ with $f_{i}^{l}=\rho^{*}(\bar{f}_{i}^{l})$ are $k$ independent solutions of (3.23) and hence ${\mathcal{J}}^{l}=f_{i}^{l}J_{i}$ are (functionally independent) $G$-invariant horizontal gauge momenta for $l=1,...,k$. It is important to note that item $(iii)$ is the only item determining the functions $f_{i}$, while the other two items are intrinsic conditions imposed on the nonholonomic system. ∎ ###### Remark 3.16. The momentum equation (3.20) does not depend on the potential energy function but only on its $G$-invariance. As a consequence, the horizontal gauge momentum $\mathcal{J}$, defined from Theorem 3.15, is a first integral of $(\mathcal{M},\Omega_{\mbox{\tiny{$\mathcal{M}$}}}|_{\mathcal{C}},H_{\mbox{\tiny{$\mathcal{M}$}}}=\kappa|_{\mbox{\tiny{$\mathcal{M}$}}}+U)$ for any $G$-invariant potential energy function $U$ on $Q$. Such a property, called weak-Noetherianity, was first observed and studied in [28, 29, 31]. $\diamond$ ###### Corollary 3.17. Consider a nonholonomic system $(\mathcal{M},\Omega_{\mbox{\tiny{$\mathcal{M}$}}}|_{\mathcal{C}},H_{\mbox{\tiny{$\mathcal{M}$}}})$ with a $G$-symmetry satisfying Conditions $(\mathcal{A}1)$-$(\mathcal{A}4)$ and with a strong invariant kinetic energy on $S$. If the horizontal space $H$, defined in (3.13), is orthogonal to the vertical space $V$ (with respect to the kinetic energy metric), then the system automatically admits $k=\textup{rank}(S)$ $G$-invariant (functionally independent) horizontal gauge momenta. ###### Proof. If $V^{\perp}=H$ then $H$ is $S$-orthogonal and also $\kappa(X_{0},[Y_{i},X_{0}])=0$ for all $i=1,...,k$. Thus we are under the hypotheses of Theorem 3.15. ∎ ###### Remark 3.18. Since it is not always possible to choose $H=V^{\perp}$ with $H\subset D$, in some examples we have to check that $\kappa(X_{0},[X_{0},Y])=0$ for all $Y\in\Gamma(S)$. 
This condition is equivalently written as $\kappa(X_{0},[X_{0},Y_{i}])=0$ for all $i=1,...,k$, where $Y_{i}=(\xi_{i})_{Q}$ with $\xi_{i}$ elements of the $G$-invariant basis $\mathfrak{B}_{\mathfrak{g}_{S}}$ in (2.10); it can also be expressed as $(\pounds_{X_{0}}\kappa)(X_{0},Y_{i})=0$ or $\kappa(\nabla_{X_{0}}Y_{i},X_{0})=0$ for $\nabla$ the Levi-Civita connection associated to the kinetic energy metric. $\diamond$ Guiding Example: nonholonomic oscillator. The nonholonomic oscillator describes a particle in $Q=S^{1}\times\mathbb{R}\times S^{1}$ with a Lagrangian given by $L=\frac{m}{2}(\dot{x}^{2}+\dot{y}^{2}+\dot{z}^{2})-U(y)$ and the velocity constraint $\dot{z}=y\dot{x}$. The constraint distribution is given by $D=\textup{span}\\{Y:=\partial_{x}+y\partial_{z},\partial_{y}\\}$. The Lie group $G=S^{1}\times S^{1}$ acts on $Q$ so that $V=\textup{span}\\{\partial_{x},\partial_{z}\\}$ and leaves $D$ and $L$ invariant. Then $S=\textup{span}\\{Y\\}$ and the kinetic energy metric is trivially strong invariant on $S$ since $\textup{rank}(S)=1$ (in fact, it is strong invariant on $V$, see Example 3.10). Moreover, we see that $V^{\perp}=\textup{span}\\{\partial_{y}\\}\subset D$ and hence, defining the horizontal space $H:=V^{\perp}$, Corollary 3.17 guarantees the existence of one $G$-invariant horizontal gauge momentum. Next, we will follow Theorem 3.15 to compute the horizontal gauge momentum $\mathcal{J}$ for this example. Let us consider the basis $\mathcal{B}_{TQ}=\\{X_{0}=\partial_{y},Y=\partial_{x}+y\partial_{z},\partial_{z}\\}$ of $TQ$ with coordinates $(v^{0},v^{\mbox{\tiny{$Y$}}},v^{z})$. Observe that this basis induces the vertical complement of the constraints $W=\textup{span}\\{\partial_{z}\\}$. Then on $T^{*}Q$ we have the dual basis $\mathcal{B}_{T^{*}Q}=\\{dy,dx,\epsilon:=dz-ydx\\}$ with coordinates $(p_{0},p_{\mbox{\tiny{$Y$}}},p_{z})$. 
The constraint submanifold $\mathcal{M}$ is given by $\mathcal{M}=\\{(x,y,z,p_{0},p_{\mbox{\tiny{$Y$}}},p_{z})\ :\ p_{z}=\frac{y}{1+y^{2}}p_{\mbox{\tiny{$Y$}}}\\}$. Recall that $G$ acts on $Q$ defining a principal bundle $\rho_{\mbox{\tiny{$Q$}}}:Q\to Q/G$ so that $\rho_{\mbox{\tiny{$Q$}}}(x,y,z)=y$. The Lie algebra of the symmetry group is $\mathfrak{g}=\mathbb{R}^{2}$ and $\mathfrak{g}_{S}=\textup{span}\\{\xi=(1,y)\\}$ while $\mathfrak{g}_{W}=\textup{span}\\{(0,1)\\}$. Following (2.11), the element $\xi\in\Gamma(\mathfrak{g}_{S})$ defines the function $J_{\xi}:=\langle J^{\mbox{\tiny{nh}}},\xi\rangle=p_{\mbox{\tiny{$Y$}}}$ and the horizontal gauge momentum will be written as ${\mathcal{J}}=f(y)p_{\mbox{\tiny{$Y$}}}$ ($f$ is already considered as a $G$-invariant function on $Q$). The momentum equation from Proposition 3.3: The function ${\mathcal{J}}$ is a horizontal gauge momentum if and only if $f$ satisfies $f(y)\langle J,\sigma_{\mathfrak{g}_{S}}\rangle(\xi_{\mbox{\tiny{$\mathcal{M}$}}},X_{\mbox{\tiny{nh}}})+p_{\mbox{\tiny{$Y$}}}X_{\mbox{\tiny{nh}}}(f)=0$. Since $d^{\mathcal{C}}dx=0$, we have $\langle J,d^{\mathcal{C}}dx\otimes\xi\rangle=0$ and thus the momentum equation reduces to $f(y)\langle J,\mathcal{K}_{\mbox{\tiny{$\mathcal{W}$}}}\rangle(\xi_{\mbox{\tiny{$\mathcal{M}$}}},X_{\mbox{\tiny{nh}}})+p_{\mbox{\tiny{$Y$}}}f^{\prime}(y)=0.$ (3.25) The differential equation of Theorem 3.15: Next, we write the momentum equation in coordinates, as it is expressed in (3.22). 
Since $\textup{rank}(S)=1$, the ordinary differential equation to be solved, for $f=f(y)$, is $R_{\mbox{\tiny{$YY$}}}f-f^{\prime}=0$ for $R_{\mbox{\tiny{$YY$}}}=\tfrac{1}{\kappa(Y,Y)}\,\kappa(Y,[Y,\partial_{y}])=-\tfrac{y}{1+y^{2}}.$ Therefore, the solution of the ordinary differential equation $\tfrac{y}{1+y^{2}}f+f^{\prime}=0,$ (3.26) gives the (already known) horizontal gauge momentum ${\mathcal{J}}=\frac{1}{\sqrt{1+y^{2}}}p_{\mbox{\tiny{$Y$}}}$ (which in canonical coordinates reads ${\mathcal{J}}=\sqrt{1+y^{2}}\,p_{x}$). ### 3.4 A geometric interpretation: horizontal gauge symmetries as parallel sections In this section, we will see how a horizontal gauge symmetry can be constructed by parallel transporting an element $\xi_{0}\in(\mathfrak{g}_{S})_{q_{0}}$, for $q_{0}\in Q$, along the dynamics using a specific affine connection. Consider the splitting $TQ=H\oplus S\oplus W$ of the tangent bundle, in which we not only take the distribution $H$ to be $S$-orthogonal, but also choose the vertical complement $W$ orthogonal to $S$: $W:=S^{\perp}\cap V.$ On the bundle $\mathfrak{g}_{S}\to Q$, we define the affine connection $\widehat{\nabla}:\mathfrak{X}(Q)\times\Gamma(\mathfrak{g}_{S})\to\Gamma(\mathfrak{g}_{S})$ given, for each $X\in\mathfrak{X}(Q)$ and $\xi\in\Gamma(\mathfrak{g}_{S})$, by $\widehat{\nabla}_{X}\,\xi:=A_{S}(\nabla_{X}\,\xi_{\mbox{\tiny{$Q$}}}),$ (3.27) where $\nabla:\mathfrak{X}(Q)\times\mathfrak{X}(Q)\to\mathfrak{X}(Q)$ is the Levi-Civita connection with respect to the kinetic energy metric and $A_{S}:TQ\to\mathfrak{g}$ is the bundle map defined in (3.14). Observe that, since $\textup{Im}(A_{S})=\mathfrak{g}_{S}$, the connection $\widehat{\nabla}$ is well defined. ###### Remark 3.19. It is straightforward to check that $\widehat{\nabla}$ is, in fact, an affine connection. Moreover, this connection is related to the nonholonomic connection restricted to the bundle $\mathfrak{g}_{S}\to Q$ (see e.g., [18]). 
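The definition (3.27) can be made concrete on the nonholonomic oscillator, whose Christoffel symbol $\widehat{\Gamma}_{0Y}^{Y}=\tfrac{y}{1+y^{2}}$ is stated in the guiding example at the end of this section. For the flat kinetic energy metric, $\nabla_{X_{0}}Y=\partial_{z}$, and $A_{S}$ extracts the $S$-component; since the adapted frame is orthogonal, this is an orthogonal projection onto $Y$. A numerical sketch in Python (the frame and the flat metric with $m=1$ are assumptions borrowed from the example):

```python
import math

def dot(u, v):
    # flat kinetic energy metric (m = 1), coordinates ordered as (x, y, z)
    return sum(a*b for a, b in zip(u, v))

def gamma_hat(y):
    # adapted frame TQ = H + S + W at a point with coordinate y
    X0 = (0.0, 1.0, 0.0)          # d/dy spans H
    Y = (1.0, 0.0, y)             # d/dx + y d/dz spans S
    Z = (-y, 0.0, 1.0)            # spans W = S^perp ∩ V (the Section 3.4 choice)
    ez = (0.0, 0.0, 1.0)          # nabla_{X0} Y = d/dy (1, 0, y) = d/dz
    assert abs(dot(Y, Z)) < 1e-12 and dot(Y, X0) == 0.0   # orthogonal frame
    # A_S(nabla_{X0} Y): the S-component of d/dz, i.e. the projection onto Y
    return dot(ez, Y) / dot(Y, Y)

for y in (-2.0, -0.5, 0.0, 1.0, 3.0):
    assert math.isclose(gamma_hat(y), y/(1.0 + y**2))     # = y/(1+y^2)
```

The returned coefficient reproduces $\widehat{\Gamma}_{0Y}^{Y}=\tfrac{y}{1+y^{2}}$ at every sampled point.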
$\diamond$ Next, we will modify this affine connection using a gauge transformation. (In this context, the terminology gauge transformation refers to the modification of an affine connection using gauge theory, [48, 49, 53]; in Section 4.1 a gauge transformation is used, instead, to modify almost Poisson brackets.) Assuming Conditions $(\mathcal{A}1)$-$(\mathcal{A}4)$, we denote by $\mathfrak{B}_{\mathfrak{g}_{S}}=\\{\xi_{1},...,\xi_{k}\\}$ a global $G$-invariant basis of sections of the bundle $\mathfrak{g}_{S}\to Q$ and we recall the bases $\mathfrak{B}_{TQ}$ and $\mathfrak{B}_{T^{*}Q}$ defined in (3.18): $\mathfrak{B}_{\mathfrak{g}_{S}}=\\{\xi_{1},...,\xi_{k}\\},\qquad\mathfrak{B}_{TQ}=\\{X_{0},Y_{i},Z_{a}\\}\quad\mbox{and}\quad\mathfrak{B}_{T^{*}Q}=\\{X^{0},Y^{i},Z^{a}\\},$ (3.28) where $Y_{i}=(\xi_{i})_{\mbox{\tiny{$Q$}}}$ for $i=1,...,k$. ###### Definition 3.20. The $\Sigma$-connection is the affine connection $\overset{\textit{\tiny{$\Sigma$}}}{\nabla}:\mathfrak{X}(Q)\times\Gamma(\mathfrak{g}_{S})\to\Gamma(\mathfrak{g}_{S})$ defined, for $X\in\mathfrak{X}(Q)$ and $\zeta\in\Gamma(\mathfrak{g}_{S})$, by $\overset{\textit{\tiny{$\Sigma$}}}{\nabla}_{X}\,\zeta:=\widehat{\nabla}_{X}\,\zeta+\Sigma(X,\zeta_{Q})$ where $\widehat{\nabla}$ is the affine connection defined in (3.27) and $\Sigma$ is the $\mathfrak{g}_{S}$-valued bilinear form $\Sigma=\Sigma^{l}\otimes\xi_{l}$, where the $\Sigma^{l}$ are the bilinear forms given, in the basis (3.28), by $\Sigma^{l}=-(\widehat{\Gamma}_{0j}^{l}+R_{lj})X^{0}\otimes Y^{j}-\widehat{\Gamma}_{ij}^{l}Y^{i}\otimes Y^{j},$ where $\widehat{\Gamma}_{0j}^{l}$ and $\widehat{\Gamma}_{ij}^{l}$ are the Christoffel symbols of the affine connection $\widehat{\nabla}$ and the $R_{ij}$ are the functions defined in (3.21). ###### Remark 3.21. The $\Sigma$-connection is still an affine connection, since $\Sigma$ is a bilinear form which does not need to be skew-symmetric. 
For short, we may write $\overset{\textit{\tiny{$\Sigma$}}}{\mathcal{\nabla}}:=\widehat{\nabla}+\Sigma$ and observe that the $\Sigma$-connection is a gauge covariant derivative, [48, 49, 53]. $\diamond$ Next, we show that a horizontal gauge symmetry is a parallel section of $\mathfrak{g}_{S}\to Q$ with respect to the $\Sigma$-connection. For that purpose, let us denote by $c(t)\in\mathcal{M}$ the integral curve of $X_{\mbox{\tiny{nh}}}$ and by $\gamma(t)=\tau_{\mathcal{M}}(c(t))$ the corresponding curve on $Q$. ###### Theorem 3.22. Let $(\mathcal{M},\Omega_{\mbox{\tiny{$\mathcal{M}$}}}|_{\mathcal{C}},H_{\mbox{\tiny{$\mathcal{M}$}}})$ be a nonholonomic system with a $G$-symmetry satisfying Conditions $(\mathcal{A}1)$-$(\mathcal{A}4)$, with a strong invariant kinetic energy on $S$ and such that the horizontal space $H$ in (3.13) is $S$-orthogonal. Let us denote by $\gamma(t)$ the curve on $Q$ given by $\gamma(t):=\tau_{\mbox{\tiny{$\mathcal{M}$}}}(c(t))$ where $c(t)$ is the integral curve of $X_{\emph{{\mbox{\tiny{nh}}}}}$. If $\kappa(X_{0},[Y,X_{0}])=0$ for all $Y\in\Gamma(S)$ and $X_{0}$ a $\rho$-projectable vector field on $Q$ taking values in $H$, then the parallel transport of $\zeta_{0}\in(\mathfrak{g}_{S})_{q_{0}}$, for $q_{0}\in Q$, with respect to the $\Sigma$-connection along the nonholonomic dynamics $\gamma(t)$ on $Q$ passing through $q_{0}$, generates a horizontal gauge symmetry. In other words, if a $G$-invariant section $\zeta\in\Gamma(\mathfrak{g}_{S})$ satisfies that $\zeta(q_{0})=\zeta_{0}$ and $\overset{\textit{\tiny{$\Sigma$}}}{\mathcal{\nabla}}_{\dot{\gamma}(t)}\,\zeta=0,$ then the function ${\mathcal{J}}_{\zeta}=\langle J^{\emph{{\mbox{\tiny{nh}}}}},\zeta\rangle\in C^{\infty}(\mathcal{M})$ is a horizontal gauge momentum. ###### Proof. 
Denote by $\mathfrak{B}_{\mathfrak{g}_{S}}=\\{\xi_{1},...,\xi_{k}\\}$ a global $G$-invariant basis of the bundle $\mathfrak{g}_{S}\to Q$; then a $G$-invariant section $\zeta$ of $\mathfrak{g}_{S}$ is written as $\zeta=f_{j}\xi_{j}$ for $f_{j}\in C^{\infty}(Q)^{G}$. Since $\dot{\gamma}(t)=T\tau_{\mbox{\tiny{$\mathcal{M}$}}}(X_{\mbox{\tiny{nh}}})=v^{0}X_{0}+v^{i}Y_{i}$, then $(T\tau_{\mbox{\tiny{$\mathcal{M}$}}}X_{\mbox{\tiny{nh}}})(f_{l})=v^{0}X_{0}(f_{l})$ and $\overset{\textit{\tiny{$\Sigma$}}}{\nabla}_{\dot{\gamma}(t)}\zeta=f_{j}\overset{\textit{\tiny{$\Sigma$}}}{\nabla}_{\dot{\gamma}(t)}\xi_{j}+T\tau_{\mbox{\tiny{$\mathcal{M}$}}}(X_{\mbox{\tiny{nh}}})(f_{l})\xi_{l}=v^{0}(f_{j}\Gamma_{0j}^{l}+X_{0}(f_{l}))\xi_{l}+v^{i}f_{j}\Gamma_{ij}^{l}\xi_{l},$ where $\Gamma_{0j}^{l},\Gamma_{ij}^{l}$ are the Christoffel symbols of $\overset{\textit{\tiny{$\Sigma$}}}{\nabla}$ in the basis (3.28), i.e., $\overset{\textit{\tiny{$\Sigma$}}}{\nabla}_{X_{0}}\,\xi_{j}=\Gamma_{0j}^{l}\xi_{l}$ and $\overset{\textit{\tiny{$\Sigma$}}}{\nabla}_{Y_{i}}\,\xi_{j}=\Gamma_{ij}^{l}\xi_{l}$. By Definition 3.20, $\Gamma_{0j}^{l}=\hat{\Gamma}_{0j}^{l}+\Sigma^{l}(X_{0},(\xi_{j})_{Q})$ and $\Gamma_{ij}^{l}=\hat{\Gamma}_{ij}^{l}+\Sigma^{l}((\xi_{i})_{Q},(\xi_{j})_{Q})$. Then $\Gamma_{0j}^{l}=\hat{\Gamma}_{0j}^{l}+\Sigma^{l}_{0j}=-R_{lj}$ and $\Gamma_{ij}^{l}=\hat{\Gamma}_{ij}^{l}+\Sigma^{l}_{ij}=0$. We conclude that $\overset{\textit{\tiny{$\Sigma$}}}{\nabla}_{\dot{\gamma}(t)}\zeta=0$ if and only if the functions $(f_{1},...,f_{k})$ are a solution of the system $-R_{lj}f_{j}+X_{0}(f_{l})=0$, which means, by Theorem 3.15, that $\zeta=f_{i}\xi_{i}$ is a horizontal gauge symmetry. Observe that we are assuming that $v^{0}\neq 0$, which holds except on a measure-zero set. ∎ Guiding Example: nonholonomic oscillator. 
Let us continue with the example describing the nonholonomic oscillator studied in Section 3.3, but in this case, we will consider $W=S^{\perp}\cap V=\textup{span}\\{Z:=-y\frac{\partial}{\partial x}+\frac{\partial}{\partial z}\\}$ and we recall that $H=\textup{span}\\{X_{0}:=\frac{\partial}{\partial y}\\}$ and $S=\textup{span}\\{Y:=\frac{\partial}{\partial x}+y\frac{\partial}{\partial z}\\}$. Denoting by $\xi=(1,y)$ the $G$-invariant generator of $\Gamma(\mathfrak{g}_{S})$, the Christoffel symbols of $\widehat{\nabla}$ are given by $\widehat{\nabla}_{X_{0}}\xi=\widehat{\Gamma}_{\mbox{\tiny{$0Y$}}}^{\mbox{\tiny{$Y$}}}\xi=\tfrac{y}{1+y^{2}}\xi\quad\mbox{and}\quad\widehat{\nabla}_{Y}\xi=\widehat{\Gamma}_{\mbox{\tiny{$YY$}}}^{\mbox{\tiny{$Y$}}}\xi=0.$ Therefore, we observe that $\overset{\textit{\tiny{$\Sigma$}}}{\mathcal{\nabla}}=\widehat{\nabla}$ since, using Def. 3.20, the $\mathfrak{g}$-valued bilinear form $\Sigma=\Sigma^{\mbox{\tiny{$Y$}}}\otimes\xi=0$, where $\Sigma^{\mbox{\tiny{$Y$}}}=-(\widehat{\Gamma}_{\mbox{\tiny{$0Y$}}}^{\mbox{\tiny{$Y$}}}+R_{\mbox{\tiny{$YY$}}})dy\otimes(\tfrac{1}{1+y^{2}}(dx+ydz))-\widehat{\Gamma}_{\mbox{\tiny{$YY$}}}^{\mbox{\tiny{$Y$}}}\tfrac{1}{(1+y^{2})^{2}}(dx+ydz)\otimes(dx+ydz)=0.$ Following Theorem 3.22, $\zeta=f(y)\xi$ is a $G$-invariant horizontal gauge symmetry if and only if $\widehat{\nabla}_{\dot{\gamma}}\zeta=0$. ## 4 Existence of horizontal gauge momenta and related consequences on the dynamics and geometry of the systems ### 4.1 Integrability and hamiltonization of the reduced dynamics As we saw in Section 2.1, a nonholonomic system $(\mathcal{M},\Omega_{\mbox{\tiny{$\mathcal{M}$}}}|_{\mathcal{C}},H_{\mbox{\tiny{$\mathcal{M}$}}})$ with a $G$-symmetry can be reduced to the quotient manifold $\mathcal{M}/G$ and the reduced dynamics is given by integral curves of the vector field $X_{{\mbox{\tiny{red}}}}$ on $\mathcal{M}/G$ defined in (2.5). 
Moreover, since the hamiltonian function $H_{\mbox{\tiny{$\mathcal{M}$}}}$ on $\mathcal{M}$ is $G$-invariant as well, it descends to a reduced hamiltonian function $H_{\mbox{\tiny{red}}}$ on the quotient $\mathcal{M}/G$, i.e., $H_{\mbox{\tiny{$\mathcal{M}$}}}=\rho^{*}H_{\mbox{\tiny{red}}}$, and, as expected, it is a first integral of $X_{\mbox{\tiny{red}}}$. The following lemma will be used in the subsequent subsections. ###### Lemma 4.1. If $(\mathcal{M},\Omega_{\mbox{\tiny{$\mathcal{M}$}}}|_{\mathcal{C}},H_{\mbox{\tiny{$\mathcal{M}$}}})$ is a nonholonomic system with a $G$-symmetry satisfying Conditions $(\mathcal{A}1)$, $(\mathcal{A}2)$ and $(\mathcal{A}4)$, then $\textup{dim}(\mathcal{M}/G)=k+2$, where $k=\textup{rank}(S)$. ###### Proof. From (3.15), we have that $D=H\oplus S$ and thus $\textup{rank}(D)=k+1$, since $\textup{rank}(H)=\textup{dim}(Q/G)=1$ and $\textup{rank}(S)=k$. Then $\textup{dim}(\mathcal{M})=\textup{dim}(Q)+\textup{rank}(D)$ and hence, since $G$ acts on $T^{*}Q$ by the lifted action, $\textup{dim}(\mathcal{M}/G)=\textup{dim}(Q/G)+\textup{rank}(D)=k+2$. ∎ #### Integrability of the reduced system In this section, we recall the concept of ‘broad integrability’ and we show that the reduced dynamics $X_{\mbox{\tiny{red}}}$ on $\mathcal{M}/G$ of a nonholonomic system $(\mathcal{M},\Omega_{\mbox{\tiny{$\mathcal{M}$}}}|_{\mathcal{C}},H_{\mbox{\tiny{$\mathcal{M}$}}})$ with a $G$-symmetry satisfying the hypotheses of Theorem 3.15 is integrable by quadratures, also called geometrically integrable (see [55]); if, in addition, some compactness hypotheses are satisfied, it is also ‘broadly integrable’. In order to perform our analysis, we identify broad integrability, which extends complete (or, better, non-commutative) integrability outside the hamiltonian framework, with quasi-periodicity of the dynamics. 
We base our analysis on the characterization of quasi-periodicity outside the hamiltonian framework introduced in [13] (see also [34, 25, 61]). ###### Definition 4.2. A vector field $X$ on a manifold $M$ of dimension $n$ is called broad integrable if $(i)$ there exists a submersion $F=(f_{1},\ldots,f_{n-d}):M\longrightarrow\mathbb{R}^{n-d}$ with compact and connected level sets, whose components $f_{1},\ldots,f_{n-d}$ are first integrals of $X$, i.e., $X(f_{i})=0$ for all $i=1,\ldots,n-d$; $(ii)$ there exist $d$ linearly independent vector fields $Y_{1},\ldots,Y_{d}$ on $M$, tangent to the level sets of the first integrals (i.e., $Y_{\alpha}(f_{i})=0$ for all $\alpha=1,\ldots,d$ and for all $i=1,\ldots,n-d$), that pairwise commute and commute with $X$ (such vector fields are also called dynamical symmetries of $X$). As in the hamiltonian case, being broad integrable has important consequences for the characterization of the dynamics and the geometry of the phase space: ###### Theorem 4.3 ([13, 34, 61]). Let $M$ be a manifold of dimension $n$. If the vector field $X$ on $M$ is broad integrable, then * (i) for each $c\in\mathbb{R}^{n-d}$, the level sets $F^{-1}(c)$ of $F$ on $M$ are diffeomorphic to $d$–dimensional tori; * (ii) the flow of $X$ is conjugate to a linear flow on the fibers of $F$. Precisely, for each $c\in\mathbb{R}^{n-d}$, there exists a neighbourhood $\mathcal{U}$ of $F^{-1}(c)$ in $M$ and a diffeomorphism $\displaystyle\Phi:$ $\displaystyle\;\mathcal{U}\longrightarrow F(\mathcal{U})\times\mathbb{T}^{d}$ $\displaystyle m\longmapsto\Phi(m)=(F(m),\varphi(m))$ which conjugates the flow of $X$ on $\mathcal{U}$ to the linear flow $\dot{F}=0\,,\qquad\dot{\varphi}=\omega(F)\,,$ on $F(\mathcal{U})\times\mathbb{T}^{d}$, for certain functions $\omega_{i}:F(\mathcal{U})\longrightarrow\mathbb{R}$. 
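A minimal illustration of Definition 4.2 and Theorem 4.3 (an elementary sketch, not taken from the paper) is the case $n=2$, $d=1$: for the rotation field $X=-y\partial_{x}+x\partial_{y}$ on $\mathbb{R}^{2}\setminus\\{0\\}$, the function $F=x^{2}+y^{2}$ is a submersion with compact connected level sets (circles), $X$ itself is the commuting tangent field, and the flow is conjugate to the linear flow $\dot{\varphi}=\omega(F)=1$:

```python
import math

# n = 2, d = 1: X = -y d/dx + x d/dy on R^2 \ {0}.
# F(x, y) = x^2 + y^2 is a first integral with circles as level sets,
# and the flow of X is the rigid rotation, i.e. conjugate to phi' = 1.
def flow(x, y, t):
    # exact flow of X: rotation by angle t
    c, s = math.cos(t), math.sin(t)
    return (c*x - s*y, s*x + c*y)

x0, y0 = 1.2, -0.5
F0 = x0**2 + y0**2
for t in (0.3, 1.0, 2.5):
    x, y = flow(x0, y0, t)
    assert math.isclose(x*x + y*y, F0)                  # F is conserved
    dphi = (math.atan2(y, x) - math.atan2(y0, x0)) % (2*math.pi)
    assert math.isclose(dphi, t % (2*math.pi))          # linear flow, omega = 1
```

Here the tori of Theorem 4.3 are the circles $F^{-1}(c)$, and $\varphi$ is the angular coordinate on each of them.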
Now, we go back to our nonholonomic system $(\mathcal{M},\Omega_{\mbox{\tiny{$\mathcal{M}$}}}|_{\mathcal{C}},H_{\mbox{\tiny{$\mathcal{M}$}}})$ with a $G$-symmetry. If we assume that the hypotheses of Theorem 3.15 are satisfied, then the nonholonomic system admits $k=\textup{rank}(S)$ (functionally independent) $G$-invariant horizontal gauge momenta. This fact, together with the facts that $H_{\mbox{\tiny{red}}}$ is a first integral of $X_{\mbox{\tiny{red}}}$ and that the reduced manifold $\mathcal{M}/G$ has dimension $k+2$, ensures that the reduced dynamics $X_{\mbox{\tiny{red}}}$ is integrable by quadratures. Moreover, if the joint level sets of the first integrals are connected and compact, the reduced dynamics satisfies the hypothesis of Theorem 4.3 and it is then broadly integrable on circles. We can summarize these integrability issues as follows. ###### Theorem 4.4. Consider a nonholonomic system $(\mathcal{M},\Omega_{\mbox{\tiny{$\mathcal{M}$}}}|_{\mathcal{C}},H_{\mbox{\tiny{$\mathcal{M}$}}})$ with a $G$-symmetry satisfying Conditions $(\mathcal{A}1)$-$(\mathcal{A}4)$. If the hypotheses of Theorem 3.15 are fulfilled, then 1. $(i)$ The vector field $X_{\emph{{\mbox{\tiny{red}}}}}$ admits $k+1$ (functionally independent) first integrals $\\{\bar{\mathcal{J}}_{1},\ldots,\bar{\mathcal{J}}_{k},H_{\emph{{\mbox{\tiny{red}}}}}\\}$ on $\mathcal{M}/G$, where $H_{\emph{{\mbox{\tiny{red}}}}}$ is the reduced hamiltonian; 2. $(ii)$ The map $F=(\bar{\mathcal{J}}_{1},\ldots,\bar{\mathcal{J}}_{k},H_{\emph{{\mbox{\tiny{red}}}}}):\mathcal{M}/G\longrightarrow\mathbb{R}^{k+1}$ is a surjective submersion. The non-equilibrium orbits of the reduced dynamics $X_{\emph{{\mbox{\tiny{red}}}}}$ are given by the joint level sets of $(\bar{\mathcal{J}}_{1},\ldots,\bar{\mathcal{J}}_{k},H_{\emph{{\mbox{\tiny{red}}}}})$, and hence the reduced dynamics is integrable by quadratures; 3.
$(iii)$ If the map $F=(\bar{\mathcal{J}}_{1},\ldots,\bar{\mathcal{J}}_{k},H_{\emph{{\mbox{\tiny{red}}}}}):\mathcal{M}/G\longrightarrow\mathbb{R}^{k+1}$ is proper, then the reduced dynamics is broad integrable and the reduced phase space inherits the structure of an $S^{1}$-principal bundle. ###### Proof. Given that $\dim\mathcal{M}/G=k+2$ and that we have $k+1$ (functionally independent) first integrals of the reduced dynamics $X_{\mbox{\tiny{red}}}$, namely the $k$ horizontal gauge momenta $\bar{\mathcal{J}}_{1},\ldots,\bar{\mathcal{J}}_{k}$ from Theorem 3.15 and the reduced hamiltonian $H_{\mbox{\tiny{red}}}$, the reduced dynamics is integrable by quadratures. Items $(ii)$ and $(iii)$ follow immediately from Definition 4.2 and Theorem 4.3. ∎ #### Hamiltonization The non-hamiltonian character of a nonholonomic system can also be seen by the fact that the dynamics is not described by a symplectic form or a Poisson bracket. More precisely, as we have seen in Section 2.1, the restriction of the 2-form $\Omega_{\mbox{\tiny{$\mathcal{M}$}}}$ to the distribution $\mathcal{C}$ is nondegenerate and hence it allows one to define the nonholonomic bracket $\\{\cdot,\cdot\\}_{\mbox{\tiny{nh}}}$ on functions on $\mathcal{M}$ (see [58, 44, 39]), given, for each $f\in C^{\infty}(\mathcal{M})$, by $X_{f}=\\{\cdot,f\\}_{\mbox{\tiny{nh}}}\mbox{ \ if and only if \ }{\bf i}_{X_{f}}\Omega_{\mbox{\tiny{$\mathcal{M}$}}}|_{\mathcal{C}}=df|_{\mathcal{C}},$ (4.29) where $(\cdot)|_{\mathcal{C}}$ denotes the point-wise restriction to $\mathcal{C}$.
The nonholonomic bracket is an almost Poisson bracket on $\mathcal{M}$ (see Appendix A for more details) with characteristic distribution given by the nonintegrable distribution $\mathcal{C}$, and we say that it describes the dynamics since the nonholonomic vector field $X_{\mbox{\tiny{nh}}}$ is hamiltonian with respect to the bracket and the hamiltonian function $H_{\mbox{\tiny{$\mathcal{M}$}}}$, i.e., $X_{\mbox{\tiny{nh}}}=\\{\cdot,H_{\mbox{\tiny{$\mathcal{M}$}}}\\}_{\mbox{\tiny{nh}}}.$ (4.30) In this framework, we use the triple $(\mathcal{M},\\{\cdot,\cdot\\}_{\mbox{\tiny{nh}}},H_{\mbox{\tiny{$\mathcal{M}$}}})$ to define a nonholonomic system. If the nonholonomic system admits a $G$-symmetry, then the nonholonomic bracket $\\{\cdot,\cdot\\}_{\mbox{\tiny{nh}}}$ is $G$-invariant and it defines an almost Poisson bracket $\\{\cdot,\cdot\\}_{\mbox{\tiny{red}}}$ on the quotient space $\mathcal{M}/G$ given, for each $\bar{f},\bar{g}\in C^{\infty}(\mathcal{M}/G)$, by $\\{\bar{f},\bar{g}\\}_{\mbox{\tiny{red}}}\circ\rho(m)=\\{\bar{f}\circ\rho,\bar{g}\circ\rho\\}_{\mbox{\tiny{nh}}}(m),\qquad m\in\mathcal{M},$ (4.31) where $\rho:\mathcal{M}\to\mathcal{M}/G$ is, as usual, the orbit projection (see App. A). The reduced bracket $\\{\cdot,\cdot\\}_{\mbox{\tiny{red}}}$ describes the reduced dynamics $X_{\mbox{\tiny{red}}}$ (defined in (2.5)) since $X_{\mbox{\tiny{red}}}=\\{\cdot,H_{\mbox{\tiny{red}}}\\}_{\mbox{\tiny{red}}}.$ The hamiltonization problem studies whether the reduced dynamics $X_{\mbox{\tiny{red}}}$ is hamiltonian with respect to a Poisson bracket on the reduced space $\mathcal{M}/G$ (which might be a different bracket from $\\{\cdot,\cdot\\}_{\mbox{\tiny{red}}}$). One of the most important consequences of Theorem 3.15 is related to the hamiltonization problem, as the following theorem shows. ###### Theorem 4.5.
If a nonholonomic system $(\mathcal{M},\\{\cdot,\cdot\\}_{\emph{{\mbox{\tiny{nh}}}}},H_{\mbox{\tiny{$\mathcal{M}$}}})$ with a $G$-symmetry verifying Conditions $(\mathcal{A}1)$-$(\mathcal{A}4)$ satisfies the hypotheses of Theorem 3.15, then there exists a rank 2-Poisson bracket $\\{\cdot,\cdot\\}_{\emph{{\mbox{\tiny{red}}}}}^{B_{\emph{\mbox{\tiny{\\!H\\!G\\!M}}}}}$ on $\mathcal{M}/G$ describing the reduced dynamics: $X_{\emph{{\mbox{\tiny{red}}}}}=\\{\cdot,H_{\emph{{\mbox{\tiny{red}}}}}\\}_{\emph{{\mbox{\tiny{red}}}}}^{B_{\emph{\mbox{\tiny{\\!H\\!G\\!M}}}}},$ for $H_{\emph{{\mbox{\tiny{red}}}}}:\mathcal{M}/G\to\mathbb{R}$ the reduced hamiltonian. The problem of finding the bracket $\\{\cdot,\cdot\\}_{\mbox{\tiny{red}}}^{B_{\mbox{\tiny{\\!H\\!G\\!M}}}}$, once $k$ horizontal gauge momenta exist, was already studied in [37, 8]. Here, however, in the light of the techniques introduced to prove Theorem 3.15, we take a different path that highlights the role played by the momentum equation. More precisely, first we study how different choices of a (global $G$-invariant) basis $\mathfrak{B}_{\mathfrak{g}_{S}}$ of $\Gamma(\mathfrak{g}_{S})$ generate different rank 2-Poisson brackets on $\mathcal{M}/G$. If the nonholonomic system admits $k$ (functionally independent $G$-invariant) horizontal gauge symmetries, then there is a rank 2-Poisson bracket $\\{\cdot,\cdot\\}_{\mbox{\tiny{red}}}^{{\mbox{\tiny{$B$}}}_{\mbox{\tiny{H\\!G\\!M}}}}$ that describes the dynamics, which is defined by choosing the basis of $\Gamma(\mathfrak{g}_{S})$ given by the horizontal gauge symmetries. Then we show how $\\{\cdot,\cdot\\}_{{\mbox{\tiny{red}}}}^{B_{\mbox{\tiny{\\!H\\!G\\!M}}}}$ depends on the system of differential equations (3.22). For the basic definitions regarding Poisson brackets, bivector fields and gauge transformations see Appendix A.
Let us consider a 2-form $B$ on $\mathcal{M}$ that is semi-basic with respect to the bundle $\tau_{\mbox{\tiny{$\mathcal{M}$}}}:\mathcal{M}\to Q$. The gauge transformation of $\\{\cdot,\cdot\\}_{\mbox{\tiny{nh}}}$ by the 2-form $B$ gives the almost Poisson bracket $\\{\cdot,\cdot\\}_{\mbox{\tiny{$B$}}}$ defined, at each $f\in C^{\infty}(\mathcal{M})$, by ${\bf i}_{X_{f}}(\Omega_{\mbox{\tiny{$\mathcal{M}$}}}+B)|_{\mathcal{C}}=df|_{\mathcal{C}}\quad\mbox{if and only if}\quad X_{f}=\\{\cdot,f\\}_{\mbox{\tiny{$B$}}}.$ If the 2-form $B$ is $G$-invariant, then the bracket $\\{\cdot,\cdot\\}_{\mbox{\tiny{$B$}}}$ is also $G$-invariant and it can be reduced to an almost Poisson bracket $\\{\cdot,\cdot\\}_{{\mbox{\tiny{red}}}}^{{\mbox{\tiny{$B$}}}}$ on the quotient manifold $\mathcal{M}/G$ given, at each $\bar{f},\bar{g}\in C^{\infty}(\mathcal{M}/G)$, by $\\{\bar{f},\bar{g}\\}^{\mbox{\tiny{$B$}}}_{\mbox{\tiny{red}}}\circ\rho(m)=\\{\bar{f}\circ\rho,\bar{g}\circ\rho\\}_{\mbox{\tiny{$B$}}}(m),$ (4.32) for $m\in\mathcal{M}$; see Diag. (A.55) and also [36, 6]. Let $\mathfrak{B}_{\mathfrak{g}_{S}}$ be a global $G$-invariant basis of $\Gamma(\mathfrak{g}_{S})$ and recall from (2.11) the associated $G$-invariant momenta $J_{i}$. ###### Proposition 4.6. Consider a nonholonomic system $(\mathcal{M},\\{\cdot,\cdot\\}_{\emph{{\mbox{\tiny{nh}}}}},H_{\mbox{\tiny{$\mathcal{M}$}}})$ with a $G$-symmetry satisfying Conditions $(\mathcal{A}1)$-$(\mathcal{A}3)$.
Given a (global $G$-invariant) basis $\mathfrak{B}_{\mathfrak{g}_{S}}$ of $\Gamma(\mathfrak{g}_{S})$, the associated 2-form $B_{\sigma}=\langle J,\sigma_{\mathfrak{g}_{S}}\rangle$ induces a gauge transformation of the nonholonomic bracket $\\{\cdot,\cdot\\}_{\emph{{\mbox{\tiny{nh}}}}}$ so that * $(i)$ the gauge-related bracket $\\{\cdot,\cdot\\}_{B_{\sigma}}$ on $\mathcal{M}$ is $G$-invariant; * $(ii)$ the induced reduced bracket $\\{\cdot,\cdot\\}_{\emph{{\mbox{\tiny{red}}}}}^{B_{\sigma}}$ on $\mathcal{M}/G$ is Poisson with symplectic leaves given by the common level sets of the momenta $\bar{J}_{i}$, where $\bar{J}_{i}\in C^{\infty}(\mathcal{M}/G)$ so that $\rho^{*}\bar{J}_{i}=J_{i}$. In particular, if Condition $(\mathcal{A}4)$ is satisfied, then the Poisson bracket $\\{\cdot,\cdot\\}_{\emph{{\mbox{\tiny{red}}}}}^{B_{\sigma}}$ has 2-dimensional symplectic leaves. ###### Proof. $(i)$ By construction, we see that the 2-form $\langle J,\sigma_{\mathfrak{g}_{S}}\rangle$ is semi-basic with respect to the bundle $\mathcal{M}\to Q$ and, by Lemma 3.2, it is $G$-invariant as well. Therefore, the gauge transformation by the 2-form $\langle J,\sigma_{\mathfrak{g}_{S}}\rangle$ defines a $G$-invariant almost Poisson bracket $\\{\cdot,\cdot\\}_{B_{\sigma}}$. $(ii)$ The $G$-invariant bracket $\\{\cdot,\cdot\\}_{B_{\sigma}}$ induces, on the quotient space $\mathcal{M}/G$, an almost Poisson bracket $\\{\cdot,\cdot\\}_{\mbox{\tiny{red}}}^{B_{\sigma}}$. It is shown666In the notation of [8], $B_{\sigma}$ corresponds to the 2-form $B_{1}$ but for any $G$-invariant basis of $\Gamma(\mathfrak{g}_{S})$. The bracket $\\{\cdot,\cdot\\}_{\mbox{\tiny{red}}}^{B_{\sigma}}$ is denoted by $\\{\cdot,\cdot\\}_{\mbox{\tiny{red}}}^{1}$ in the cited reference. in [8, Prop.3.9] that $\\{\cdot,\cdot\\}_{\mbox{\tiny{red}}}^{B_{\sigma}}$ is a Poisson bracket with symplectic leaves given by the common level sets of the momenta $\bar{J}_{i}\in C^{\infty}(\mathcal{M}/G)$.
∎ Note that the reduced nonholonomic vector field $X_{\mbox{\tiny{red}}}$ might not be tangent to the foliation of the bracket $\\{\cdot,\cdot\\}_{\mbox{\tiny{red}}}^{B_{\sigma}}$. ###### Definition 4.7. We say that a nonholonomic system $(\mathcal{M},\\{\cdot,\cdot\\}_{\mbox{\tiny{nh}}},H_{\mbox{\tiny{$\mathcal{M}$}}})$ with a $G$-symmetry is hamiltonizable by a gauge transformation if there exists a $G$-invariant 2-form $B$ so that $\\{\cdot,\cdot\\}_{\mbox{\tiny{red}}}^{\mbox{\tiny{$B$}}}$ is Poisson777More generally, the bracket $\\{\cdot,\cdot\\}_{\mbox{\tiny{red}}}^{\mbox{\tiny{$B$}}}$ may be only conformally Poisson and $X_{\mbox{\tiny{red}}}=\\{\cdot,H_{\mbox{\tiny{red}}}\\}_{\mbox{\tiny{red}}}^{\mbox{\tiny{$B$}}},$ (4.33) for $H_{\mbox{\tiny{red}}}:\mathcal{M}/G\to\mathbb{R}$ the reduced hamiltonian. ###### Definition 4.8. [6] A gauge transformation by a 2-form $B$ of the nonholonomic bracket $\\{\cdot,\cdot\\}_{\mbox{\tiny{nh}}}$ is dynamical if $B$ is semi-basic with respect to the bundle $\mathcal{M}\to Q$ and ${\bf i}_{X_{\mbox{\tiny{nh}}}}B=0.$ That is, if $B$ induces a bracket $\\{\cdot,\cdot\\}_{{\mbox{\tiny{$B$}}}}$ that describes the nonholonomic dynamics: $X_{\mbox{\tiny{nh}}}=\\{\cdot,H_{\mbox{\tiny{$\mathcal{M}$}}}\\}_{\mbox{\tiny{$B$}}}.$ Therefore, once we know that different 2-forms of the type $B_{\sigma}$ produce different Poisson brackets on the reduced space, we need to find the one that is dynamical, if it exists. Observe that if the system admits $k$ ($G$-invariant) horizontal gauge momenta, then we have a preferred basis ${\mathfrak{B}}_{\mbox{\tiny{HGS}}}=\\{\zeta_{1},...,\zeta_{k}\\}$ of $\Gamma(\mathfrak{g}_{S})$ given by the horizontal gauge symmetries. Let us denote by $\sigma_{\mbox{\tiny{HGS}}}$ the 2-form $\sigma_{\mathfrak{g}_{S}}$ (defined in (3.16)), computed with respect to the basis ${\mathfrak{B}}_{\mbox{\tiny{HGS}}}$, and set $B_{\mbox{\tiny{HGS}}}:=\langle J,\sigma_{\mbox{\tiny{HGS}}}\rangle$.
The proof of Theorem 4.5 is based on the following two facts: on the one hand, $B_{\mbox{\tiny{HGS}}}$ defines a dynamical gauge transformation, and on the other hand (by Proposition 4.6), the resulting reduced bracket $\\{\cdot,\cdot\\}_{{\mbox{\tiny{red}}}}^{B_{\mbox{\tiny{H\\!G\\!M}}}}$ is Poisson. Proof of Theorem 4.5. Under the hypotheses of Theorem 3.15, the nonholonomic system admits $k$ $G$-invariant horizontal gauge momenta $\\{\mathcal{J}_{1},...,\mathcal{J}_{k}\\}$ with the corresponding $G$-invariant horizontal gauge symmetries that generate a basis $\mathfrak{B}_{\mbox{\tiny{HGS}}}=\\{\zeta_{1},...,\zeta_{k}\\}$ of $\Gamma(\mathfrak{g}_{S})$. Following [8, Thm. 3.7] and, in particular, [8, Corollary 3.13] (since $\textup{rank}(H)=1$), the 2-form $B_{\mbox{\tiny{HGS}}}=\langle J,\sigma_{\mbox{\tiny{HGS}}}\rangle$ associated to the basis $\mathfrak{B}_{\mbox{\tiny{HGS}}}$ induces a dynamical gauge transformation and hence the induced reduced bracket $\\{\cdot,\cdot\\}_{{\mbox{\tiny{red}}}}^{B_{\mbox{\tiny{H\\!G\\!M}}}}$ describes the reduced dynamics: $X_{\mbox{\tiny{red}}}=\\{\cdot,H_{\mbox{\tiny{red}}}\\}_{{\mbox{\tiny{red}}}}^{B_{\mbox{\tiny{H\\!G\\!M}}}}$. This bracket is then Poisson with symplectic leaves defined by the common level sets of the horizontal gauge momenta $\\{\mathcal{J}_{1},...,\mathcal{J}_{k}\\}$ (Proposition 4.6). $\square$ The following diagrams compare Proposition 4.6 with Theorem 4.5. The first diagram illustrates the case when we perform a gauge transformation by a 2-form $B_{\sigma}$ (associated to the choice of a basis $\mathfrak{B}_{\mathfrak{g}_{S}}$ of $\Gamma(\mathfrak{g}_{S})$, Proposition 4.6), while the second one illustrates the case when the 2-form is $B_{\mbox{\tiny{HGS}}}$ (associated to the basis $\mathfrak{B}_{\mbox{\tiny{HGS}}}$ given by the horizontal gauge symmetries, Theorem 4.5).
In both cases, we obtain that the resulting reduced brackets $\\{\cdot,\cdot\\}_{\mbox{\tiny{red}}}^{{\mbox{\tiny{$B$}}}_{\sigma}}$ and $\\{\cdot,\cdot\\}_{\mbox{\tiny{red}}}^{B_{\mbox{\tiny{H\\!G\\!M}}}}$ are Poisson. However, $\\{\cdot,\cdot\\}_{\mbox{\tiny{red}}}^{{\mbox{\tiny{$B$}}}_{\sigma}}$ might not describe the reduced dynamics since $B_{\sigma}$ is not necessarily dynamical. On the other hand, $B_{\mbox{\tiny{HGS}}}$ is always dynamical and thus the reduced bracket $\\{\cdot,\cdot\\}_{\mbox{\tiny{red}}}^{B_{\mbox{\tiny{H\\!G\\!M}}}}$ describes the dynamics: $X_{\mbox{\tiny{red}}}=\\{\cdot,H_{\mbox{\tiny{red}}}\\}_{\mbox{\tiny{red}}}^{B_{\mbox{\tiny{H\\!G\\!M}}}}$. ###### Remark 4.9. Under the hypotheses of Theorem 4.5, the functions $\\{H_{\mbox{\tiny{$\mathcal{M}$}}},\mathcal{J}_{1},...,\mathcal{J}_{k}\\}$ are in involution with respect to the bracket $\\{\cdot,\cdot\\}_{B_{\mbox{\tiny{H\\!G\\!M}}}}$, where $\\{\mathcal{J}_{1},...,\mathcal{J}_{k}\\}$ are the horizontal gauge momenta defined by Theorem 3.15. In addition, the reduced functions $\\{H_{{\mbox{\tiny{red}}}},\bar{\mathcal{J}}_{1},...,\bar{\mathcal{J}}_{k}\\}$ on $\mathcal{M}/G$ are also in involution with respect to the reduced bracket $\\{\cdot,\cdot\\}_{{\mbox{\tiny{red}}}}^{B_{\mbox{\tiny{H\\!G\\!M}}}}$. However, these functions are not necessarily in involution with respect to the brackets $\\{\cdot,\cdot\\}_{\mbox{\tiny{nh}}}$ and $\\{\cdot,\cdot\\}_{\mbox{\tiny{red}}}$, respectively. $\diamond$ In many cases, the horizontal gauge symmetries cannot be written explicitly; instead, they are defined in terms of the solutions of the system of differential equations (3.22).
The next Theorem gives the formula that expresses the dynamical gauge transformation $B_{\mbox{\tiny{HGS}}}$ (and, as a consequence, the Poisson bracket $\\{\cdot,\cdot\\}_{{\mbox{\tiny{red}}}}^{B_{\mbox{\tiny{H\\!G\\!M}}}}$) in a chosen basis $\mathfrak{B}_{\mathfrak{g}_{S}}$ that is not necessarily given by the horizontal gauge symmetries. Examples 5.2 and 5.3 illustrate the importance of the following formula. ###### Theorem 4.10. Consider a nonholonomic system described by the triple $(\mathcal{M},\\{\cdot,\cdot\\}_{\emph{{\mbox{\tiny{nh}}}}},H_{\mbox{\tiny{$\mathcal{M}$}}})$ with a $G$-symmetry verifying Conditions $(\mathcal{A}1)$-$(\mathcal{A}4)$. Let $\mathfrak{B}_{\mathfrak{g}_{S}}=\\{\xi_{1},...,\xi_{k}\\}$ be a global $G$-invariant basis of $\Gamma(\mathfrak{g}_{S})$ and $X_{0}$ a $\rho$-projectable vector field on $Q$ generating the $S$-orthogonal horizontal space $H$. If the hypotheses of Theorem 3.15 are satisfied, then the 2-form $B_{\mbox{\tiny{HGS}}}$ is written with respect to the basis $\mathfrak{B}_{\mathfrak{g}_{S}}$ as $\begin{split}B_{\mbox{\tiny{HGS}}}:=\langle J,\sigma_{\mbox{\tiny{HGS}}}\rangle&=\langle J,\mathcal{K}_{\mbox{\tiny{$\mathcal{W}$}}}\rangle-\langle J,R_{ij}\mathcal{X}^{0}\wedge\mathcal{Y}^{j}\otimes\xi_{i}\rangle+\langle J,d^{\mathcal{C}}\mathcal{Y}^{i}\otimes\xi_{i}\rangle,\\\ &=p_{a}d^{\mathcal{C}}\varepsilon^{a}-J_{i}R_{ij}\mathcal{X}^{0}\wedge\mathcal{Y}^{j}+J_{i}d^{\mathcal{C}}\mathcal{Y}^{i},\end{split}$ (4.34) for $R_{ij}$ and $J_{i}$ the functions defined in (3.21) and (2.11) respectively, and $\mathcal{X}^{0}=\tau_{\mbox{\tiny{$\mathcal{M}$}}}^{*}X^{0}$, $\mathcal{Y}^{i}=\tau_{\mbox{\tiny{$\mathcal{M}$}}}^{*}Y^{i}$, $\varepsilon^{a}=\tau_{\mbox{\tiny{$\mathcal{M}$}}}^{*}\epsilon^{a}$ the corresponding forms on $\mathcal{M}$. ###### Proof.
In order to prove formula (4.34), consider the basis $\mathfrak{B}_{\mathfrak{g}_{S}}=\\{\xi_{1},...,\xi_{k}\\}$ (not necessarily given by horizontal gauge symmetries), and define the corresponding functions $J_{i}$ as in (2.11). If we denote by $F$ the fundamental matrix of solutions of the system of ordinary differential equations (3.22) (i.e., the columns of $F$ are the independent solutions $(f_{1}^{l},...,f_{k}^{l})$) and by $R$ the $k\times k$-matrix with entries $R_{ij}$, then $R.F=X_{0}(F)\quad\mbox{and}\quad\mathcal{J}=F^{T}{\bf J},\quad\mbox{where}\quad\mathcal{J}=\left(\\!\begin{matrix}[c]\mathcal{J}_{1}\\\ \vdots\\\ \mathcal{J}_{k}\end{matrix}\\!\right)\quad\mbox{and}\quad{\bf J}=\left(\\!\begin{matrix}[c]J_{1}\\\ \vdots\\\ J_{k}\end{matrix}\\!\right).$ (4.35) Moreover, let us denote by ${\mathcal{Y}}^{i}_{\mbox{\tiny{HGS}}}$ the 1-forms on $\mathcal{M}$ such that ${\mathcal{Y}}^{i}_{\mbox{\tiny{HGS}}}((\zeta_{l})_{\mbox{\tiny{$\mathcal{M}$}}})=\delta_{il}$ and ${\mathcal{Y}}^{i}_{\mbox{\tiny{HGS}}}|_{\mathcal{H}}={\mathcal{Y}}^{i}_{\mbox{\tiny{HGS}}}|_{\mathcal{W}}=0$. Then if ${\mathcal{Y}}_{\mbox{\tiny{HGS}}}=({\mathcal{Y}}^{1}_{\mbox{\tiny{HGS}}},...,{\mathcal{Y}}^{k}_{\mbox{\tiny{HGS}}})^{T}$ we have that ${\mathcal{Y}}_{\mbox{\tiny{HGS}}}=F^{-1}{\mathcal{Y}}$ where $\mathcal{Y}=({\mathcal{Y}}^{1},...,{\mathcal{Y}}^{k})^{T}$. 
Hence $\begin{split}\langle J,d^{\mathcal{C}}{\mathcal{Y}}^{i}_{\mbox{\tiny{HGS}}}\otimes\zeta_{i}\rangle=&\ \mathcal{J}^{T}.\,d^{\mathcal{C}}{\mathcal{Y}}_{\mbox{\tiny{HGS}}}={\bf J}^{T}Fd^{\mathcal{C}}(F^{-1}{\mathcal{Y}})={\bf J}^{T}FX_{0}(F^{-1})\mathcal{X}^{0}\wedge{\mathcal{Y}}+{\bf J}^{T}FF^{-1}d^{\mathcal{C}}{\mathcal{Y}}\\\ =&-{\bf J}^{T}F(F^{-1}X_{0}(F)F^{-1})\mathcal{X}^{0}\wedge{\mathcal{Y}}+{\bf J}^{T}d^{\mathcal{C}}{\mathcal{Y}}=-{\bf J}^{T}R\mathcal{X}^{0}\wedge{\mathcal{Y}}+{\bf J}^{T}d^{\mathcal{C}}{\mathcal{Y}}\\\ =&-J_{i}R_{ij}\mathcal{X}^{0}\wedge{\mathcal{Y}}^{j}+\langle J,d^{\mathcal{C}}{\mathcal{Y}}^{i}\otimes\xi_{i}\rangle.\end{split}$ Finally, we conclude, using Definition 3.1, that $B_{\mbox{\tiny{HGS}}}=\langle J,\mathcal{K}_{\mbox{\tiny{$\mathcal{W}$}}}\rangle+\langle J,d^{\mathcal{C}}{\mathcal{Y}}^{i}_{\mbox{\tiny{HGS}}}\otimes\zeta_{i}\rangle=p_{a}d^{\mathcal{C}}\varepsilon^{a}-J_{i}R_{ij}\mathcal{X}^{0}\wedge\mathcal{Y}^{j}+J_{i}d^{\mathcal{C}}\mathcal{Y}^{i}.$ ∎ Following Example 3.11 and Corollary 3.5, next we observe that a system that admits a basis of $\mathfrak{g}_{S}\to Q$ given by $G$-invariant horizontal symmetries is hamiltonizable without the need of a gauge transformation (i.e., $B_{\mbox{\tiny{HGS}}}=0$ in this case). ###### Corollary 4.11 (of Theorem 4.5 and Corollary 3.5, Horizontal symmetries). Let $(\mathcal{M},\\{\cdot,\cdot\\}_{\emph{{\mbox{\tiny{nh}}}}},H_{\mbox{\tiny{$\mathcal{M}$}}})$ be a nonholonomic system with a $G$-symmetry satisfying Conditions $\mathcal{A}$ and with the bundle $\mathfrak{g}_{S}\to Q$ admitting a basis of $G$-invariant horizontal symmetries. Then, the reduced bracket $\\{\cdot,\cdot\\}_{\emph{{\mbox{\tiny{red}}}}}$ on $\mathcal{M}/G$ is twisted Poisson with characteristic distribution given by the common level sets of the horizontal gauge momenta. If Condition $(\mathcal{A}4)$ is fulfilled, $\\{\cdot,\cdot\\}_{\emph{{\mbox{\tiny{red}}}}}$ is a $\textup{rank}\,2$-Poisson bracket. 
###### Proof. It can be observed from (4.34) that $B_{\mbox{\tiny{H\\!G\\!M}}}=0$ when the basis $\mathfrak{B}_{\mathfrak{g}_{S}}$ is given by constant sections (this was also proven in [8]). However, it is easier to give a direct proof of this fact: if $\eta\in\mathfrak{g}$ is a horizontal symmetry, then ${\bf i}_{\eta_{\mbox{\tiny{$\mathcal{M}$}}}}\Omega_{\mbox{\tiny{$\mathcal{M}$}}}|_{\mathcal{C}}=dJ_{\eta}|_{\mathcal{C}}$, thus $\pi_{\mbox{\tiny{nh}}}^{\sharp}(dJ_{\eta})=-\eta_{\mbox{\tiny{$\mathcal{M}$}}}$ and hence $\pi_{\mbox{\tiny{red}}}^{\sharp}(d\bar{J_{\eta}})=0$. Then the reduced bracket $\\{\cdot,\cdot\\}_{\mbox{\tiny{red}}}$ admits $k$ Casimirs. Since the rank of the characteristic distribution of $\\{\cdot,\cdot\\}_{\mbox{\tiny{red}}}$ is $\textup{dim}(\mathcal{M}/G)-k$, by Lemma 4.1 we conclude that its characteristic distribution is integrable and given by the common level sets of the horizontal gauge momenta. Following Remark A.2, the reduced bracket $\\{\cdot,\cdot\\}_{\mbox{\tiny{red}}}$ is twisted Poisson. Since Condition $(\mathcal{A}4)$ implies that $\textup{rank}(H)=1$, then $\textup{dim}(\mathcal{M}/G)=2+k$ and thus the characteristic distribution of $\\{\cdot,\cdot\\}_{\mbox{\tiny{red}}}$ has 2-dimensional leaves. Therefore, the foliation is symplectic and $\\{\cdot,\cdot\\}_{\mbox{\tiny{red}}}$ is Poisson. ∎ ### 4.2 Horizontal gauge momenta and broad integrability of the complete system In the previous subsections we have studied the dynamics and the geometry of the reduced system. Under the hypotheses of Theorem 3.15 the reduced dynamics is integrable by quadratures, and if the joint level sets of the first integrals are connected and compact the reduced dynamics consists of periodic orbits or equilibria. Moreover, the reduced system is hamiltonizable via a rank-2 Poisson structure, whose (global) Casimirs are the $k$ horizontal gauge momenta. In this Section we aim to obtain information on the dynamics and geometry of the complete system.
We will then focus on the case in which the reduced dynamics is periodic and, by using techniques of reconstruction theory, we will see that if the symmetry group $G$ is compact, then the dynamics of the complete system is quasi-periodic on tori of dimension at most $\textup{rank}\,G+1$, where $\textup{rank}\,G$ denotes the rank of the group, i.e., the dimension of a maximal torus of $G$. If the symmetry group $G$ is not compact, the complete dynamics can be either quasi-periodic on tori or an unbounded copy of $\mathbb{R}$, depending on the symmetry group. Some details on these aspects are reviewed in Appendix B, but see also [2, 33]. We thus show how the broad integrability of the complete dynamics of this type of systems is deeply related to their symmetries, which are able to produce not only the right number of dynamical symmetries but also the complementary number of first integrals. We will then apply these results to the example of a heavy homogeneous ball that rolls without sliding inside a convex surface of revolution (see Section 5.3). This case presents a periodic dynamics in the reduced space, and a broadly integrable complete dynamics on tori of dimension at most three, thus re-obtaining the results in [38, 26]. We say that a $G$-invariant subset $\mathcal{P}$ of $\mathcal{M}$ is a relative periodic orbit for $X_{\mbox{\tiny{nh}}}$ if it is invariant under the flow and its projection on $\mathcal{M}/G$ is a periodic orbit of $X_{\mbox{\tiny{red}}}$. Now, we can summarize these results as follows. ###### Theorem 4.12. Let us consider a nonholonomic system $(\mathcal{M},\Omega_{\mbox{\tiny{$\mathcal{M}$}}}|_{\mathcal{C}},H_{\mbox{\tiny{$\mathcal{M}$}}})$ with a $G$-symmetry satisfying Conditions $(\mathcal{A}1)$-$(\mathcal{A}4)$. Assume that the hypotheses of Theorem 3.15 are fulfilled and that the reduced dynamics is periodic. Then 1.
$(i)$ if the group $G$ is compact, the flow of $X_{\emph{{\mbox{\tiny{nh}}}}}$ on a relative periodic orbit $\mathcal{P}$ is quasi–periodic with at most $\textup{rank}\,G+1$ frequencies, and the phase space is fibered by tori of dimension up to $\textup{rank}\,G+1$. 2. $(ii)$ if $G$ is non–compact, the flow of $X_{\emph{{\mbox{\tiny{nh}}}}}$ on a relative periodic orbit is either quasi–periodic or a copy of $\mathbb{R}$ that leaves every compact subset of $\mathcal{P}$.888From now on we will call escaping a dynamical behaviour that leaves every compact subset of $\mathcal{P}$. ###### Proof. To prove this result we combine the results on integrability of the reduced system given by Theorem 4.4 with the results on reconstruction theory from periodic orbits recalled in Appendix B. More precisely, we confine ourselves to the subspace of the reduced space $\mathcal{M}/G$ in which the dynamics is periodic. Then, if the symmetry group is compact, the reconstructed dynamics is generically quasi-periodic on tori of dimension $r+1$, where $r$ is the rank of the group [35, 42, 38, 23]. The phase space, or at least a certain region of it, has the structure of a $\mathbb{T}^{r+1}$ fiber bundle (see [26] for details on the geometric structure of the phase space in this case). On the other hand, if the group is not compact, the reconstructed orbits are quasi-periodic or a copy of $\mathbb{R}$ that ‘spirals’ toward a certain direction. ∎ ## 5 Examples ### 5.1 The snakeboard The snakeboard is a variant of the skateboard in which the rider can rotate the axes of the wheels, creating a torque that makes the board spin about a vertical axis, see [51, 12]. We denote by $r$ the distance from the center of the board to the pivot point of the wheel axes, by $m$ the mass of the board, by $\mathbb{J}$ the inertia of the rotor and by ${\mathbb{J}}_{1}$ the inertia of each wheel.
Following [12] we assume that the parameters are chosen such that $\mathbb{J}+2\mathbb{J}_{1}+\mathbb{J}_{0}=mr^{2}$, where $\mathbb{J}_{0}$ denotes the inertia of the board. The snakeboard is then modelled on the manifold $Q=SE(2)\times S^{1}\times S^{1}$ with coordinates $q=(\theta,x,y,\psi,\phi)$, where $(\theta,x,y)$ represent the position and orientation of the board, $\psi$ is the angle of the rotor with respect to the board, and $\phi$ is the angle of the front and back wheels with respect to the board (in this simplified model they are assumed to be equal). Figure 1: The snakeboard. The Lagrangian is given by $L(q,\dot{q})=\frac{1}{2}m(\dot{x}^{2}+\dot{y}^{2}+r^{2}\dot{\theta}^{2})+\frac{1}{2}{\mathbb{J}}\dot{\psi}^{2}+{\mathbb{J}}\dot{\psi}\dot{\theta}+{\mathbb{J}}_{0}\dot{\phi}^{2}.$ The nonholonomic constraints impose that the front and back wheels roll without sliding and hence the constraint 1-forms are defined to be $\begin{split}\omega^{1}&=-\sin(\theta+\phi)\,dx+\cos(\theta+\phi)\,dy-r\cos\phi\,d\theta,\\\ \omega^{2}&=-\sin(\theta-\phi)\,dx+\cos(\theta-\phi)\,dy+r\cos\phi\,d\theta.\end{split}$ (5.36) Note that $\omega^{1}$ and $\omega^{2}$ are independent whenever $\phi\neq\pm\pi/2$. Therefore, we define the configuration manifold $Q$ so that $Q=SE(2)\times S^{1}\times(-\pi/2,\pi/2)$. The constraint distribution $D$ is given by $D=\textup{span}\\{Y_{\theta}:=\sin\phi\partial_{\theta}-r\cos\phi\cos\theta\partial_{x}-r\cos\phi\sin\theta\partial_{y},\,\partial_{\psi},\,\partial_{\phi}\\}.$ (5.37) The existence of horizontal gauge momenta. The system is invariant with respect to the free and proper action on $Q$ of $G=SE(2)\times S^{1}$ given by $\Phi((\alpha,a,b;\beta),(\theta,x,y,\psi,\phi))=(\theta+\alpha,x\cos\alpha-y\sin\alpha+a,x\sin\alpha+y\cos\alpha+b,\psi+\beta,\phi),$ and hence $V=\textup{span}\\{\partial_{\theta},\partial_{\psi},\partial_{x},\partial_{y}\\}$ and $S=\textup{span}\\{Y_{\theta},\partial_{\psi}\\}$ (see [12]).
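As a quick consistency check (ours, not part of the paper; variable names are chosen for illustration), the following sympy sketch verifies that the three generators of $D$ in (5.37) annihilate both constraint 1-forms (5.36); since $\omega^{1}$ and $\omega^{2}$ are independent for $\phi\neq\pm\pi/2$, this confirms that $D$ is their common kernel.

```python
import sympy as sp

theta, phi, r = sp.symbols('theta phi r', real=True)

# Constraint 1-forms (5.36), as rows in the coframe (d_theta, dx, dy, d_psi, d_phi)
omega1 = sp.Matrix([[-r*sp.cos(phi), -sp.sin(theta + phi), sp.cos(theta + phi), 0, 0]])
omega2 = sp.Matrix([[r*sp.cos(phi), -sp.sin(theta - phi), sp.cos(theta - phi), 0, 0]])

# Generators of D (5.37), as columns in (partial_theta, partial_x, partial_y, partial_psi, partial_phi)
Y_theta = sp.Matrix([sp.sin(phi), -r*sp.cos(phi)*sp.cos(theta), -r*sp.cos(phi)*sp.sin(theta), 0, 0])
d_psi = sp.Matrix([0, 0, 0, 1, 0])
d_phi = sp.Matrix([0, 0, 0, 0, 1])

# Every generator of D annihilates both constraint forms
for w in (omega1, omega2):
    for v in (Y_theta, d_psi, d_phi):
        assert sp.simplify((w * v)[0]) == 0
print("D = span{Y_theta, d_psi, d_phi} lies in ker(omega^1) and ker(omega^2)")
```

The cancellation for $Y_{\theta}$ rests on the identity $\sin(\theta\pm\phi)\cos\theta-\cos(\theta\pm\phi)\sin\theta=\pm\sin\phi$, which sympy's `simplify` resolves.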
First, we observe that $[Y_{\theta},\partial_{\psi}]=0$ and hence the kinetic energy metric is trivially strong invariant on $S$. Second, $H:=\textup{span}\\{\partial_{\phi}\\}$ and it is straightforward to check that $V^{\perp}=H$. Then, by Corollary 3.17$(i)$ the system admits 2 (functionally independent) $G$-invariant horizontal gauge momenta. The computation of the horizontal gauge momenta. Let us consider the basis adapted to $TQ=D\oplus W$, given by $\mathfrak{B}_{TQ}=\\{Y_{\theta},\partial_{\psi},\partial_{\phi},Z_{1},Z_{2}\\}$, where $Z_{1}:=\frac{1}{2\cos\phi}\left(-\sin\theta\partial_{x}+\cos\theta\partial_{y}-\frac{1}{r}\partial_{\theta}\right)\qquad\mbox{and}\qquad Z_{2}:=\frac{1}{2\cos\phi}\left(-\sin\theta\partial_{x}+\cos\theta\partial_{y}+\frac{1}{r}\partial_{\theta}\right).$ Denoting by $(p_{\theta},p_{\psi},p_{\phi},p_{1},p_{2})$ the coordinates on $T^{*}Q$ associated to the dual basis $\mathfrak{B}_{T^{*}Q}=\\{\alpha_{\theta}:=-\tfrac{1}{r\cos\phi}(\cos\theta dx+\sin\theta dy),d\psi,d\phi,\omega^{1},\omega^{2}\\},$ we obtain that $\mathcal{M}=\left\\{(q;p_{\theta},p_{\psi},p_{\phi},p_{1},p_{2})\ :\ p_{1}=-p_{2}=-\tfrac{1}{2}\Big{(}\tfrac{(mr^{2}-\mathbb{J})\sin\phi}{r\cos\phi\,\Delta}p_{\theta}+\tfrac{mr\cos\phi}{\Delta}p_{\psi}\Big{)}\right\\},$ where $\Delta=\Delta(\phi)=mr^{2}-\mathbb{J}\sin^{2}\phi$ (recall that $\Delta(\phi)>0$, since $mr^{2}>\mathbb{J}$). We consider the global basis of $\mathfrak{g}_{S}$ given by $\mathfrak{B}_{\mathfrak{g}_{S}}=\\{\xi_{1}=(\sin\phi,-r\cos\phi\cos\theta+y,-r\cos\phi\sin\theta-x;0),\xi_{2}=(0,0,0;1)\\}$, and we observe that $(\xi_{1})_{\mbox{\tiny{$Q$}}}=Y_{\theta}$ and $(\xi_{2})_{\mbox{\tiny{$Q$}}}=\partial_{\psi}$. Following (2.11), $J_{1}=\langle J^{\mbox{\tiny{nh}}},\xi_{1}\rangle=p_{\theta}$ and $J_{2}=\langle J^{\mbox{\tiny{nh}}},\xi_{2}\rangle=p_{\psi}$.
The function $\mathcal{J}=f_{\theta}(\phi)p_{\theta}+f_{\psi}(\phi)p_{\psi}$ is a horizontal gauge momentum if and only if $R.f=f^{\prime}$, where $R$ is the $2\times 2$ matrix given in (3.22), $f=(f_{\theta},f_{\psi})^{t}$ and $f^{\prime}=(f^{\prime}_{\theta},f^{\prime}_{\psi})^{t}$ for $f^{\prime}_{\theta}=\tfrac{d}{d\phi}f_{\theta}$ (analogously for $f^{\prime}_{\psi}$). In our case, using that $\\{Y_{\theta},\partial_{\psi}\\}$ is a basis of $S$ and $X_{0}=\partial_{\phi}$, we obtain $R=[\kappa|_{S}]^{-1}N,\qquad\mbox{for}\ [\kappa|_{S}]=\left(\\!\begin{matrix}mr^{2}&\mathbb{J}\sin\phi\\\ \mathbb{J}\sin\phi&\mathbb{J}\end{matrix}\\!\right)\ \mbox{and}\ N=\left(\\!\begin{matrix}0&\ \ 0\\\ -\mathbb{J}\cos\phi&\ \ 0\end{matrix}\\!\right).$ Hence, we arrive at the linear system $\tfrac{\cos\phi}{\Delta}\left(\\!\begin{array}[]{cc}\mathbb{J}\sin\phi&0\\\ -mr^{2}&0\end{array}\\!\right)\left(\\!\begin{array}[]{c}f_{\theta}\\\ f_{\psi}\end{array}\\!\right)=\left(\\!\begin{array}[]{c}f^{\prime}_{\theta}\\\ f^{\prime}_{\psi}\end{array}\\!\right),$ (5.38) which admits 2 independent solutions: $f^{1}=(f^{1}_{\theta},f^{1}_{\psi})$, with $f^{1}_{\theta}=\frac{1}{\sqrt{2\Delta}}$, $f^{1}_{\psi}=-f^{1}_{\theta}\sin\phi$, and $f^{2}=(0,1)$. Therefore the horizontal gauge momenta can be written as $\mathcal{J}_{1}=\tfrac{1}{\sqrt{2\Delta}}\;(p_{\theta}-p_{\psi}\,\sin\phi)\qquad\mbox{and}\qquad\mathcal{J}_{2}=p_{\psi}.$ (5.39) ###### Remarks 5.1. 1. $(i)$ On the one hand, since $\xi_{2}$ is a horizontal symmetry, it is expected to have $\mathcal{J}_{2}=p_{\psi}$ conserved (Cor. 3.5). On the other hand, the horizontal gauge momentum $\mathcal{J}_{1}$ is realized by a non-constant section $\zeta_{1}$ and, as far as we know, $\mathcal{J}_{1}$ has not appeared in the literature yet.
Moreover, using that $H_{\mbox{\tiny{$\mathcal{M}$}}}=\frac{1}{2}\left(\frac{p_{\theta}^{2}}{\Delta}\,-2\frac{\sin\phi}{\Delta}\,p_{\theta}p_{\psi}+\frac{mr^{2}}{\mathbb{J}\,\Delta}\,p_{\psi}^{2}+\frac{p_{\phi}^{2}}{2\mathbb{J}_{0}}\,\right),$ it is possible to check our results. 2. $(ii)$ The horizontal gauge momenta (5.39) can also be obtained from the momentum equation in Proposition 3.3, which in this case is written as $f_{\theta}\langle J,\sigma_{\mathfrak{g}_{S}}\rangle(Y_{\theta},X_{\mbox{\tiny{nh}}})+f_{\psi}\langle J,\sigma_{\mathfrak{g}_{S}}\rangle(\partial_{\psi},X_{\mbox{\tiny{nh}}})+p_{\theta}X_{\mbox{\tiny{nh}}}(f_{\theta})+p_{\psi}X_{\mbox{\tiny{nh}}}(f_{\psi})=0.$ $\diamond$ Hamiltonization and integrability. The system descends to the quotient manifold $\mathcal{M}/G$ equipped with coordinates $(\phi,p_{\phi},p_{\theta},p_{\psi})$. The $G$-invariant horizontal gauge momenta ${\mathcal{J}}_{1},{\mathcal{J}}_{2}$ in (5.39) and the hamiltonian function $H_{\mbox{\tiny{$\mathcal{M}$}}}$ also descend to functions $\bar{\mathcal{J}}_{1},\bar{\mathcal{J}}_{2}$ and $H_{\mbox{\tiny{red}}}$ on $\mathcal{M}/G$. Integrability. Since the reduced space $\mathcal{M}/G$ is $4$-dimensional, Theorem 4.4 guarantees that the reduced dynamics is integrable by quadratures. We observe that the reduced system is not periodic; thus we can say nothing generic about the complete dynamics or the geometry of the phase space. Hamiltonization. Theorem 4.5 guarantees that the system is Hamiltonizable. In order to write the Poisson bracket on $\mathcal{M}/G$ that describes the dynamics, we compute the 2-form $B_{\mbox{\tiny{HGS}}}$ in terms of the basis $\mathfrak{B}_{TQ}=\\{Y_{1}:=Y_{\theta},Y_{2}:=\partial_{\psi},X_{0}:=\partial_{\phi},\partial_{x},\partial_{y}\\}$ using Theorem 4.10.
Let us denote by $R_{ij}$ the elements of the matrix $R$ in (5.38); then $B_{\mbox{\tiny{HGS}}}=\langle J,\mathcal{K}_{\mbox{\tiny{$\mathcal{W}$}}}\rangle- p_{\theta}(R_{11}d\phi\wedge d\theta+R_{12}d\phi\wedge d\psi)-p_{\psi}(R_{21}d\phi\wedge d\theta+R_{22}d\phi\wedge d\psi)+p_{\theta}d\alpha_{\theta}.$ First, we observe that $\langle J,\mathcal{K}_{\mbox{\tiny{$\mathcal{W}$}}}\rangle|_{\mathcal{C}}=\iota^{*}(p_{1})d\omega^{1}+\iota^{*}(p_{2})d\omega^{2}|_{\mathcal{C}}=-\left(\tfrac{(mr^{2}-\mathbb{J})\sin\phi}{\cos\phi\,\Delta}p_{\theta}+\tfrac{mr^{2}\cos\phi}{\Delta}p_{\psi}\right)\,d\phi\wedge\alpha_{\theta}|_{\mathcal{C}}.$ Second, we observe that $(R_{11}p_{\theta}+R_{21}p_{\psi})d\phi\wedge\alpha_{\theta}=\left(\tfrac{\mathbb{J}\sin\phi\cos\phi}{\Delta}p_{\theta}-\tfrac{mr^{2}\cos\phi}{\Delta}p_{\psi}\right)d\phi\wedge\alpha_{\theta}.$ Finally, using that $p_{\theta}d\alpha_{\theta}|_{\mathcal{C}}=p_{\theta}\tan\phi\,d\phi\wedge\alpha_{\theta}$ we obtain that $B_{\mbox{\tiny{HGS}}}=0$. As a consequence of Theorem 4.5, the reduced bracket $\pi_{\mbox{\tiny{red}}}$, which is given by $\pi_{\mbox{\tiny{red}}}=\partial_{\phi}\wedge\partial_{p_{\phi}}+\tfrac{\cos\phi}{\Delta}(\mathbb{J}\sin\phi\,p_{\theta}-mr^{2}p_{\psi})\partial_{p_{\phi}}\wedge\partial_{p_{\theta}},$ is a Poisson bracket on $\mathcal{M}/G$ with $\bar{\mathcal{J}}_{1}$ and $\bar{\mathcal{J}}_{2}$ playing the role of Casimirs. The reduced nonholonomic vector field is then $X_{\mbox{\tiny{red}}}=\\{\cdot,H_{\mbox{\tiny{red}}}\\}_{\mbox{\tiny{red}}}\,.$ ###### Remark 5.2. The $G$-symmetry considered in this paper is different from the one considered in [8, 3]; therefore the reduced bracket obtained here is not the same as the one presented in those references. Moreover, in [8, 3], the snakeboard was described by a twisted Poisson bracket (with a 4-dimensional foliation) while here, we show that the snakeboard can be described by a rank-2 Poisson bracket.
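Both the solutions (5.39) of the linear system (5.38) and the cancellation that gives $B_{\mbox{\tiny{HGS}}}=0$ reduce to elementary trigonometric identities. The following sympy sketch (an editorial addition, not part of the original computation; `J` stands for $\mathbb{J}$ and `pt`, `pp` for $p_{\theta}$, $p_{\psi}$) verifies both claims symbolically:

```python
import sympy as sp

phi = sp.symbols('phi')
m, r, J = sp.symbols('m r J', positive=True)
pt, pp = sp.symbols('p_theta p_psi')  # stand-ins for p_theta, p_psi

Delta = m*r**2 - J*sp.sin(phi)**2

# R = [kappa|_S]^{-1} N, with kappa|_S and N as given in the text
kappa_S = sp.Matrix([[m*r**2, J*sp.sin(phi)],
                     [J*sp.sin(phi), J]])
N = sp.Matrix([[0, 0], [-J*sp.cos(phi), 0]])
R = sp.simplify(kappa_S.inv() * N)

# The two solutions of R f = f' from (5.39)
f1 = sp.Matrix([1/sp.sqrt(2*Delta), -sp.sin(phi)/sp.sqrt(2*Delta)])
f2 = sp.Matrix([0, 1])
for f in (f1, f2):
    assert all(sp.simplify(e) == 0 for e in R*f - f.diff(phi))

# Coefficient of d(phi) wedge alpha_theta in B_HGS: the <J, K_W> term,
# minus the (R_11 p_theta + R_21 p_psi) term, plus the p_theta tan(phi) term
B = (-((m*r**2 - J)*sp.sin(phi)/(sp.cos(phi)*Delta)*pt
       + m*r**2*sp.cos(phi)/Delta*pp)
     - (J*sp.sin(phi)*sp.cos(phi)/Delta*pt - m*r**2*sp.cos(phi)/Delta*pp)
     + pt*sp.tan(phi))
assert sp.simplify(B) == 0
```

The $p_{\psi}$ terms cancel pairwise, while the $p_{\theta}$ terms cancel thanks to $\sin^{2}\phi+\cos^{2}\phi=1$.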
$\diamond$ The horizontal gauge momenta as parallel sections. Consider the basis $\bar{\mathfrak{B}}_{TQ}=\\{Y_{1}:=Y_{\theta},Y_{2}:=\partial_{\psi},X_{0}:=\partial_{\phi},\bar{Z}_{1},\bar{Z}_{2}\\}$ where $\bar{Z}_{1},\bar{Z}_{2}$ generate the distribution $W=S^{\perp}\cap V$. The Christoffel symbols of the affine connection $\hat{\nabla}$ coincide with the ones of the Levi-Civita connection and then $\hat{\Gamma}_{01}^{1}=-\tfrac{\mathbb{J}\sin\phi\,\cos\phi}{\Delta}\,,\qquad\hat{\Gamma}_{01}^{2}=\tfrac{mr^{2}\cos\phi}{\Delta}\qquad\mbox{and}\qquad\hat{\Gamma}_{02}^{1}=-\hat{\Gamma}_{02}^{2}=0.$ Following Def. 3.20 we get that $\Sigma=\Sigma^{\theta}\otimes\xi_{1}+\Sigma^{\psi}\otimes\xi_{2}=0$ (indeed, $\hat{\Gamma}^{i}_{0j}=-R_{ij}$ for the matrix $R$ in (5.38)). Therefore, $\overset{\textit{\tiny{$\Sigma$}}}{\nabla}=\hat{\nabla}$ and then the horizontal gauge symmetries $\zeta=f_{1}(\phi)\xi_{1}+f_{2}(\phi)\xi_{2}$ are determined by the condition that they are parallel along the dynamics with respect to the $\hat{\nabla}$ connection, i.e., $\hat{\nabla}_{\dot{\gamma}}\zeta=0,$ (5.40) for $\dot{\gamma}=T\tau_{\mbox{\tiny{$\mathcal{M}$}}}(X_{\mbox{\tiny{nh}}})$. ### 5.2 Solids of Revolution Let $\mathcal{B}$ be a strongly convex body of revolution, i.e., a body which is geometrically and dynamically symmetric under rotations about a given axis ([23, 4]). Let us assume that the surface ${\bf S}$ of $\mathcal{B}$ is invariant under rotations around a given axis, which in our case is chosen to be $e_{3}$. Then its principal moments of inertia are $\mathbb{I}_{1}=\mathbb{I}_{2}$ and $\mathbb{I}_{3}$. Figure 2: Solid of revolution rolling on a horizontal plane. The position of the body in $\mathbb{R}^{3}$ is given by the coordinates $(g,{\bf x})$ where $g\in SO(3)$ is the orientation of the body with respect to an inertial frame $(e_{x},e_{y},e_{z})$ and ${\bf x}=(x,y,z)\in\mathbb{R}^{3}$ is the position of the center of mass.
Denoting by ${\bf m}$ the mass of the body, the lagrangian $L:T(SO(3)\times\mathbb{R}^{3})\to\mathbb{R}$ is given by $L(g,{\bf x};\boldsymbol{\Omega},\dot{\bf x})=\frac{1}{2}\langle\mathbb{I}\boldsymbol{\Omega},\boldsymbol{\Omega}\rangle+\frac{1}{2}{\bf m}||\dot{\bf x}||^{2}+{\bf m}{\bf g}\langle{\bf x},e_{3}\rangle,$ where $\boldsymbol{\Omega}=(\Omega_{1},\Omega_{2},\Omega_{3})$ is the angular velocity in body coordinates, $\langle\cdot,\cdot\rangle$ represents the standard pairing in $\mathbb{R}^{3}$ and ${\bf g}$ is the gravitational acceleration. Let $s$ be the vector from the center of mass of the body to the contact point on the surface ${\bf S}$. If we denote by $\boldsymbol{\gamma}=(\gamma_{1},\gamma_{2},\gamma_{3})$ the third row of the matrix $g\in SO(3)$, then $s$ can be written as $s:S^{2}\to{\bf S}$ so that $s(\boldsymbol{\gamma})=(\varrho(\gamma_{3})\gamma_{1},\varrho(\gamma_{3})\gamma_{2},\zeta(\gamma_{3})),$ where $\varrho$ and $\zeta$ are the smooth functions defined in [23]. Therefore $s(\boldsymbol{\gamma})=\varrho\boldsymbol{\gamma}-Le_{3},$ where $\varrho=\varrho(\gamma_{3})$, $\zeta=\zeta(\gamma_{3})$ and $L=L(\gamma_{3})=\varrho\gamma_{3}-\zeta$. The configuration space is described as $Q=\\{(g,{\bf x})\in SO(3)\times\mathbb{R}^{3}\ :\ z=-\langle\boldsymbol{\gamma},s\rangle\\},$ and it is diffeomorphic to $SO(3)\times\mathbb{R}^{2}$. The nonholonomic constraints describing rolling without sliding are written as $\boldsymbol{\Omega}\times s+{\bf b}=0,$ where ${\bf b}=g^{t}\dot{\bf x}$ (with $g^{t}$ the transpose of $g$).
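The identity $s(\boldsymbol{\gamma})=\varrho\boldsymbol{\gamma}-Le_{3}$ is immediate from the definition $L=\varrho\gamma_{3}-\zeta$; a one-line symbolic check (editorial addition):

```python
import sympy as sp

g1, g2, g3 = sp.symbols('gamma1 gamma2 gamma3')
varrho, zeta = sp.Function('varrho')(g3), sp.Function('zeta')(g3)
L = varrho*g3 - zeta                          # L(gamma_3) = varrho*gamma_3 - zeta

s = sp.Matrix([varrho*g1, varrho*g2, zeta])   # s(gamma) componentwise
gamma, e3 = sp.Matrix([g1, g2, g3]), sp.Matrix([0, 0, 1])

# The third component of varrho*gamma - L*e3 collapses to zeta
assert sp.expand(s - (varrho*gamma - L*e3)) == sp.zeros(3, 1)
```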
Let us consider the (local) basis of $TQ$ given by $\\{X_{1}^{L},X_{2}^{L},X_{3}^{L},\partial_{x},\partial_{y}\\}$, where $X_{i}^{L}$ are the left invariant vector fields on $SO(3)$ and we denote the corresponding coordinates on $TQ$ by $(\boldsymbol{\Omega},\dot{x},\dot{y}).$ Then the constraint distribution $D$ is given by $D=\textup{span}\\{X_{1},X_{2},X_{3}\\}$ where $X_{i}:=X_{i}^{L}+(\boldsymbol{\alpha}\times s)_{i}\partial_{x}+(\boldsymbol{\beta}\times s)_{i}\partial_{y}+(\boldsymbol{\gamma}\times s)_{i}\partial_{z},$ for $\boldsymbol{\alpha}$ and $\boldsymbol{\beta}$ the first and second rows of the matrix $g\in SO(3)$. The constraint 1-forms are $\epsilon^{1}=dx-\langle\boldsymbol{\alpha},s\times\boldsymbol{\lambda}\rangle\quad\mbox{and}\quad\epsilon^{2}=dy-\langle\boldsymbol{\beta},s\times\boldsymbol{\lambda}\rangle,$ where $\boldsymbol{\lambda}=(\lambda_{1},\lambda_{2},\lambda_{3})$ are the (Maurer-Cartan) 1-forms on $SO(3)$ dual to the left invariant vector fields $\\{X_{1}^{L},X_{2}^{L},X_{3}^{L}\\}$. The symmetries. The Lagrangian and the constraints are invariant with respect to the action of the special Euclidean group $SE(2)$ acting on $Q$, at each $(g;x,y)\in Q$, by $\Psi((h;a,b),(g;x,y))=(\tilde{h}.g;h.(x,y)^{t}+(a,b)^{t})\,,$ where $h\in SO(2)$ is an orthogonal $2\times 2$ matrix and $\tilde{h}={\mbox{\scriptsize{$\left(\begin{array}[]{cc}h&0\\\\[-3.0pt] 0&1\end{array}\right)$}}}\in SO(3)$. The symmetry of the body also makes the system invariant with respect to the right $S^{1}$-action on $Q$ given by $\Psi_{S^{1}}(h_{\theta},(g,x,y))=(g\tilde{h}_{\theta}^{-1},h_{\theta}(x,y)^{t})$, where we identify $\theta\in S^{1}$ with the orthogonal matrix $h_{\theta}\in SO(2)$. Therefore, the symmetry group of the system is the Lie group $G=S^{1}\times SE(2)$, with associated Lie algebra $\mathfrak{g}\simeq\mathbb{R}\times\mathbb{R}\times\mathbb{R}^{2}$.
The vertical space $V$ is given by $V=\textup{span}\\{(\eta_{1})_{\mbox{\tiny{$Q$}}}=-X_{3}^{L}-y\partial_{x}+x\partial_{y},\ (\eta_{2})_{\mbox{\tiny{$Q$}}}=\langle\boldsymbol{\gamma},{\bf X}^{L}\rangle-y\partial_{x}+x\partial_{y},\ (\eta_{3})_{\mbox{\tiny{$Q$}}}=\partial_{x},\ (\eta_{4})_{\mbox{\tiny{$Q$}}}=\partial_{y}\\},$ where $\eta_{i}$ are the canonical Lie algebra elements in $\mathfrak{g}$ and ${\bf X}^{L}=(X_{1}^{L},X_{2}^{L},X_{3}^{L})$. We observe that the action is not free, since $(\eta_{i})_{\mbox{\tiny{$Q$}}}(g,x,y)$ are not linearly independent at $\gamma_{3}=\pm 1$. We check that the dimension assumption (2.3) is satisfied: $TQ=D+V$. Let us choose $W=\textup{span}\\{\partial_{x},\partial_{y}\\}$ as vertical complement of the constraints and then the basis of $TQ$ adapted to the splitting (2.6) is ${\bf B}_{TQ}=\\{X_{1},X_{2},X_{3},\partial_{x},\partial_{y}\\}$, with dual basis given by ${\bf B}_{T^{*}Q}=\\{\lambda_{1},\lambda_{2},\lambda_{3},\epsilon^{1},\epsilon^{2}\\}$. The associated coordinates on $T_{q}^{*}Q$ are $({\bf M},K_{1},K_{2})$ for ${\bf M}=(M_{1},M_{2},M_{3})$ and the submanifold $\mathcal{M}$ of $T^{*}Q$ is then described by $\mathcal{M}=\\{(g,x,y;{\bf M},K_{1},K_{2})\ :\ K_{1}={\bf m}\langle\boldsymbol{\alpha},s\times\boldsymbol{\Omega}\rangle,\quad K_{2}={\bf m}\langle\boldsymbol{\beta},s\times\boldsymbol{\Omega}\rangle\\},$ (5.41) where ${\bf M}=\mathbb{I}\boldsymbol{\Omega}+{\bf m}\,s\times(\boldsymbol{\Omega}\times s)$. The horizontal gauge momenta are functions on $\mathcal{M}$ linear in the coordinates $M_{i}$. The existence of horizontal gauge momenta. First, we observe that the $G$-action satisfies Conditions $(\mathcal{A}1)$-$(\mathcal{A}4)$ outside $\gamma_{3}=\pm 1$ and thus, in what follows, we will work on the manifolds $\widetilde{Q}\subset Q$ and $\widetilde{\mathcal{M}}\subset\mathcal{M}$ defined by the condition $\gamma_{3}\neq\pm 1$.
Second, we consider the splitting $T\widetilde{Q}=H\oplus S\oplus W,$ (5.42) where $S=D\cap V=\textup{span}\\{Y_{1}:=X_{3},Y_{2}:=\langle\gamma,{\bf X}\rangle\\}$, with ${\bf X}=(X_{1},X_{2},X_{3})$ and $H$ is generated by $X_{0}=\gamma_{1}X_{2}-\gamma_{2}X_{1}$ (observe that $H=S^{\perp}\cap D$). Now, we check that the kinetic energy is strong invariant on $S$: in this case, it is enough to see that $\kappa([Y_{1},Y_{2}],Y_{1})=0$ and $\kappa([Y_{1},Y_{2}],Y_{2})=0$. These two facts are easily verified using simply that $[X_{i}^{L},X_{j}^{L}]=X_{k}^{L}$ for $i,j,k$ cyclic permutations of $1,2,3$. In the same way, we also check that $\kappa(X_{0},[Y_{i},X_{0}])=0$, for $i=1,2$. Therefore, by Theorem 3.15, we conclude that the system admits $2=\textup{rank}(S)$ $G$-invariant (functionally independent) horizontal gauge momenta $\mathcal{J}_{1}$, $\mathcal{J}_{2}$ on $\widetilde{\mathcal{M}}$ (recovering the results in [16, 23]). The computation of the 2 horizontal gauge momenta. In order to compute the horizontal gauge momenta, we consider the basis $\mathfrak{B}_{\mathfrak{g}_{S}}$ of $\Gamma(\mathfrak{g}_{S}\to\widetilde{Q})$, defined by $\mathfrak{B}_{\mathfrak{g}_{S}}=\\{\xi_{1}:=(1;0,(h_{1},h_{2})),\xi_{2}:=(0;1,(g_{1},g_{2}))\\},$ where $h_{1}=h_{1}(g,x,y)=y+\varrho\beta_{3}$, $h_{2}=h_{2}(g,x,y)=-x-\varrho\alpha_{3}$ and $g_{1}=g_{1}(g,x,y)=y-L\beta_{3}$, $g_{2}=g_{2}(g,x,y)=-x+L\alpha_{3}$. The components of the nonholonomic momentum map, in the basis $\mathfrak{B}_{\mathfrak{g}_{S}}$, are given by $J_{1}=\langle{\mathcal{J}}^{\mbox{\tiny{nh}}},\xi_{1}\rangle={\bf i}_{(\xi_{1})_{\mbox{\tiny{$\mathcal{M}$}}}}\Theta_{\mbox{\tiny{$\mathcal{M}$}}}=-M_{3}\qquad\mbox{and}\qquad J_{2}=\langle{\mathcal{J}}^{\mbox{\tiny{nh}}},\xi_{2}\rangle={\bf i}_{(\xi_{2})_{\mbox{\tiny{$\mathcal{M}$}}}}\Theta_{\mbox{\tiny{$\mathcal{M}$}}}=\langle\boldsymbol{\gamma},{\bf M}\rangle,$ where we are using that $(\xi_{1})_{Q}=Y_{1}$ and $(\xi_{2})_{Q}=Y_{2}$, see (2.11). 
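The structure relations invoked above can be checked at the matrix level (editorial sketch; $E_{i}$ are the standard $\mathfrak{so}(3)$ generators, and for left invariant vector fields the Lie bracket mirrors the matrix commutator):

```python
import sympy as sp

# Standard basis of so(3): (E_i)_{jk} = -epsilon_{ijk}
E1 = sp.Matrix([[0, 0, 0], [0, 0, -1], [0, 1, 0]])
E2 = sp.Matrix([[0, 0, 1], [0, 0, 0], [-1, 0, 0]])
E3 = sp.Matrix([[0, -1, 0], [1, 0, 0], [0, 0, 0]])

def comm(A, B):
    """Matrix commutator [A, B] = AB - BA."""
    return A*B - B*A

# [X_i^L, X_j^L] = X_k^L for cyclic permutations (i, j, k) of (1, 2, 3)
assert comm(E1, E2) == E3
assert comm(E2, E3) == E1
assert comm(E3, E1) == E2
```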
Then, a function $\mathcal{J}=f_{1}J_{1}+f_{2}J_{2}$ is a horizontal gauge momentum if and only if the coordinate functions $(f_{1},f_{2})$ satisfy the momentum equation (3.17) $f_{1}\langle J,\sigma_{\mathfrak{g}_{S}}\rangle(\mathcal{Y}_{1},X_{\mbox{\tiny{nh}}})+f_{2}\langle J,\sigma_{\mathfrak{g}_{S}}\rangle(\mathcal{Y}_{2},X_{\mbox{\tiny{nh}}})-M_{3}X_{\mbox{\tiny{nh}}}(f_{1})+\langle\boldsymbol{\gamma},{\bf M}\rangle X_{\mbox{\tiny{nh}}}(f_{2})=0.$ That is, considering the basis $\mathfrak{B}_{T\widetilde{Q}}=\\{X_{0},\ Y_{1},\ Y_{2},\ \partial_{x},\ \partial_{y}\\}$, the $G$-invariant coordinate functions $(f_{1}=f_{1}(\gamma_{3}),f_{2}=f_{2}(\gamma_{3}))$ are the solutions of the system of ordinary differential equations (defined on $\widetilde{Q}/G$) $R\left(\\!\\!\begin{array}[]{c}f_{1}\\\ f_{2}\end{array}\\!\\!\right)=\left(\\!\\!\begin{array}[]{c}\bar{X}_{0}(f_{1})\\\ \bar{X}_{0}(f_{2})\end{array}\\!\\!\right),\qquad\mbox{for}\ R=[\kappa|_{S}]^{-1}[N],$ (5.43) where $\bar{X}_{0}=T\rho_{\widetilde{Q}}(X_{0})=(1-\gamma_{3}^{2})\partial_{\gamma_{3}}$, the matrix $[N]$ has elements $N_{lj}=\kappa(Y_{l},[Y_{j},X_{0}])-\kappa(X_{0},[Y_{j},Y_{l}])$ which in this case gives $[N]=m(1-\gamma_{3}^{2})\left(\\!\\!\begin{array}[]{cc}-\varrho A&\varrho(B-\langle\boldsymbol{\gamma},s\rangle)\\\ LA-\varrho\langle\boldsymbol{\gamma},s\rangle&-LB\end{array}\\!\\!\right)$ for $A=\varrho^{\prime}(1-\gamma_{3}^{2})-\varrho\gamma_{3}$ and $B=L^{\prime}(1-\gamma_{3}^{2})-L\gamma_{3}-\langle\boldsymbol{\gamma},s\rangle$ (with $(\cdot)^{\prime}=\tfrac{d}{d\gamma_{3}}(\cdot)$) and $[\kappa|_{S}]=\left(\\!\\!\begin{array}[]{cc}\mathbb{I}_{3}+m\varrho^{2}(1-\gamma_{3}^{2})&-\mathbb{I}_{3}\gamma_{3}-Lm\varrho(1-\gamma_{3}^{2})\\\ -\mathbb{I}_{3}\gamma_{3}-Lm\varrho(1-\gamma_{3}^{2})&\langle\boldsymbol{\gamma},\mathbb{I}\boldsymbol{\gamma}\rangle+L^{2}m(1-\gamma_{3}^{2})\end{array}\\!\\!\right).$ The system (5.43) admits two independent solutions
$\bar{f}^{1}=(\bar{f}^{1}_{1},\bar{f}^{1}_{2})$ and $\bar{f}^{2}=(\bar{f}^{2}_{1},\bar{f}^{2}_{2})$ on $\widetilde{Q}/G$ and therefore we conclude that the two ($G$-invariant) horizontal gauge momenta $\mathcal{J}_{1}$ and ${\mathcal{J}}_{2}$ are $\mathcal{J}_{1}=-f^{1}_{1}M_{3}+f^{1}_{2}\langle\boldsymbol{\gamma},{\bf M}\rangle\quad\mbox{and}\quad{\mathcal{J}}_{2}=-f^{2}_{1}M_{3}+f^{2}_{2}\langle\boldsymbol{\gamma},{\bf M}\rangle,$ (5.44) where $f^{i}_{j}=\rho^{*}\bar{f}^{i}_{j}$ for $i,j=1,2$. ###### Remark 5.3. 1. $(i)$ For $f=(f_{1},f_{2})$, the system (5.43) is equivalently written as $(1-\gamma_{3}^{2})^{-1}Rf=f^{\prime}$. Therefore, we recover the system of ordinary differential equations from [16, 23, 4] (and [9] for the special case of the Tippe-Top and of the rolling disk). 2. $(ii)$ The $G$-invariant horizontal gauge momenta ${\mathcal{J}}_{1}$, ${\mathcal{J}}_{2}$ descend to the quotient $\widetilde{\mathcal{M}}/G$ as functions $\bar{\mathcal{J}}_{1}$, $\bar{\mathcal{J}}_{2}$ that are functionally independent. It has been proven in [23] that the functions $\bar{\mathcal{J}}_{1}$, $\bar{\mathcal{J}}_{2}$ can be extended to the whole differential space $\mathcal{M}/G$. In this case, it makes sense to talk about $2=\textup{rank}(\mathfrak{g}_{S})$ horizontal gauge momenta. $\diamond$ Integrability and hamiltonization. The nonholonomic dynamics $X_{\mbox{\tiny{nh}}}$ defined on $\widetilde{\mathcal{M}}$ can be reduced to $\widetilde{\mathcal{M}}/G$ obtaining the vector field $X_{\mbox{\tiny{red}}}$ (see (2.5)). 
Using the basis $\mathfrak{B}_{T\widetilde{Q}}=\\{X_{0},Y_{1},Y_{2},\partial_{x},\partial_{y}\\}$ and its dual basis of $T^{*}\widetilde{Q}$ $\mathfrak{B}_{T^{*}\widetilde{Q}}=\left\\{X^{0}:=\frac{\gamma_{1}\lambda_{2}-\gamma_{2}\lambda_{1}}{1-\gamma_{3}^{2}},\ Y^{1}:=\gamma_{3}\frac{\gamma_{1}\lambda_{1}+\gamma_{2}\lambda_{2}}{1-\gamma_{3}^{2}}-\lambda_{3},\ Y^{2}:=\frac{\gamma_{1}\lambda_{1}+\gamma_{2}\lambda_{2}}{1-\gamma_{3}^{2}},\ \epsilon^{1},\ \epsilon^{2}\right\\},$ (5.45) we denote by $(v^{0},v^{1},v^{2},v^{x},v^{y})$ and $(p_{0},p_{1},p_{2},K_{1},K_{2})$ the associated coordinates on $T\widetilde{Q}$ and $T^{*}\widetilde{Q}$ respectively. The reduced manifold $\widetilde{\mathcal{M}}/G$ is represented by the coordinates $(\gamma_{3},p_{0},p_{1},p_{2})$. Integrability. Theorem 4.4 guarantees that the reduced system on $\widetilde{\mathcal{M}}/G$ admits three functionally independent first integrals, namely the two horizontal gauge momenta $\bar{\mathcal{J}}_{1}$ and $\bar{\mathcal{J}}_{2}$, and the reduced energy $H_{\mbox{\tiny{red}}}$. Since $\textup{dim}(\widetilde{\mathcal{M}}/G)=4$, the reduced dynamics is integrable by quadratures. However, the reduced dynamics is not generically periodic, and therefore we can say nothing generic about the complete dynamics or the geometry of the phase space. Hamiltonization. Even though the hamiltonization of this example has been studied in [4, 37], here we see it as a direct consequence of Theorem 3.15. That is, since this nonholonomic system satisfies the hypotheses of Theorem 3.15, it is hamiltonizable by a gauge transformation (Def. 4.7). The reduced bracket $\\{\cdot,\cdot\\}_{\mbox{\tiny{red}}}^{B_{\mbox{\tiny{HGM}}}}$ on $\widetilde{\mathcal{M}}/G$ defines a rank-2 Poisson structure, with 2-dimensional leaves given by the common level sets of $\bar{\mathcal{J}}_{1}$ and $\bar{\mathcal{J}}_{2}$, that describes the (reduced) dynamics.
In what follows we show how the 2-form $B_{\mbox{\tiny{HGM}}}$, inducing the dynamical gauge transformation that defines $\\{\cdot,\cdot\\}_{\mbox{\tiny{red}}}^{B_{\mbox{\tiny{HGM}}}}$, depends directly on the ordinary system of differential equations (5.43). Consider the basis $\mathfrak{B}_{T\widetilde{Q}}$ and $\mathfrak{B}_{T^{*}\widetilde{Q}}$ given in (5.45) and following Theorem 4.10, $B_{\mbox{\tiny{HGM}}}=\langle J,\sigma_{\mbox{\tiny{HGM}}}\rangle=\langle J,\mathcal{K}_{\mbox{\tiny{$\mathcal{W}$}}}\rangle- J_{i}R_{ij}\mathcal{X}^{0}\wedge\mathcal{Y}^{j}+J_{i}d\mathcal{Y}^{i},$ where $\mathcal{X}^{0}=\tau^{*}_{\tilde{\mbox{\tiny{$\mathcal{M}$}}}}X^{0}$ and $\mathcal{Y}^{i}=\tau^{*}_{\tilde{\mbox{\tiny{$\mathcal{M}$}}}}Y^{i}$ for $i=1,2$ are the corresponding 1-forms on $\widetilde{\mathcal{M}}$. Using (5.41) we have that (see [4]), $\begin{split}\langle J,\mathcal{K}_{\mbox{\tiny{$\mathcal{W}$}}}\rangle|_{\mathcal{C}}&=K_{1}\,d\epsilon^{1}|_{\mathcal{C}}+K_{2}\,d\epsilon^{2}|_{\mathcal{C}}\\\ &=m\varrho\langle\boldsymbol{\gamma},s\rangle\langle\boldsymbol{\Omega},d\boldsymbol{\lambda}\rangle-m(\varrho^{2}\langle\boldsymbol{\Omega},\boldsymbol{\gamma}\rangle+\varrho^{\prime}c_{3})\langle\boldsymbol{\gamma},d\boldsymbol{\lambda}\rangle+m(\varrho L\langle\boldsymbol{\Omega},\boldsymbol{\gamma}\rangle+L^{\prime}c_{3})d\lambda_{3}|_{\mathcal{C}}.\end{split}$ Now, recalling the definition of $X^{0}$, $Y^{1}$ and $Y^{2}$ in $\mathfrak{B}_{T^{*}Q}$ (5.45), we compute the term $\begin{split}J_{i}R_{ij}{\mathcal{X}}^{0}\wedge\mathcal{Y}^{j}&=J_{i}\,R_{i1}{\mathcal{X}}^{0}\wedge\mathcal{Y}^{1}+J_{i}\,R_{i2}{\mathcal{X}}^{0}\wedge\mathcal{Y}^{2}\\\ &=(1-\gamma_{3}^{2})^{-1}(v^{l}N_{l1}\langle\boldsymbol{\gamma},d\boldsymbol{\lambda}\rangle+v^{l}N_{l2}\,d\lambda_{3}),\\\ &=-m(\varrho^{2}\langle\boldsymbol{\Omega},\boldsymbol{\gamma}\rangle+\varrho^{\prime}c_{3})\langle\boldsymbol{\gamma},d\boldsymbol{\lambda}\rangle+m(\varrho 
L\langle\boldsymbol{\Omega},\boldsymbol{\gamma}\rangle+L^{\prime}c_{3})d\lambda_{3},\end{split}$ where we use that $v^{1}=(1-\gamma_{3}^{2})^{-1}(\langle\boldsymbol{\gamma},\boldsymbol{\Omega}\rangle\gamma_{3}-\Omega_{3})$ and $v^{2}=(1-\gamma_{3}^{2})^{-1}(\langle\boldsymbol{\gamma},\boldsymbol{\Omega}\rangle-\gamma_{3}\Omega_{3})$. Finally, since $dY^{i}=0$ for $i=1,2$, we obtain that $B_{\mbox{\tiny{HGM}}}=m\varrho\langle\boldsymbol{\gamma},s\rangle\langle\boldsymbol{\Omega},d\boldsymbol{\lambda}\rangle,$ recovering the dynamical gauge transformation from [4, 37]. For the explicit formulas for the brackets, see [4]. ###### Remarks 5.4. 1. $(i)$ Since the $G$-action on $\mathcal{M}$ is proper but not free, the quotient $\mathcal{M}/G$ is a stratified differential space [23, 4], with a 4-dimensional regular stratum given by $\widetilde{\mathcal{M}}/G$ and a 1-dimensional singular stratum, associated to the $S^{1}$-isotropy type, that is described by the condition $\gamma_{3}=\pm 1$. Moreover, the relation between the coordinates on $T^{*}\widetilde{Q}$ relative to the basis ${\bf B}_{T^{*}\widetilde{Q}}$ and $\mathfrak{B}_{T^{*}\widetilde{Q}}$ is $p_{0}=\gamma_{1}M_{2}-\gamma_{2}M_{1},\quad p_{1}=\gamma_{1}M_{1}+\gamma_{2}M_{2},\quad p_{2}=M_{3}.$ Therefore, adding $p_{3}=M_{1}^{2}+M_{2}^{2}$, we conclude that the coordinates $(\gamma_{3},p_{0},p_{1},p_{2},p_{3})$ on $\mathcal{M}/G$ are the same coordinates used in [23, 21]. 2. $(ii)$ It is straightforward to write the equations of motion on $\widetilde{\mathcal{M}}/G$ in the variables $(\gamma_{3},p_{0},p_{1},p_{2})$ for the reduced hamiltonian $H_{\mbox{\tiny{red}}}$ recovering the equations in [23, 21]. These equations can be used to check the results in this section; however, we stress that there is no need to compute them to find the horizontal gauge momenta, nor to study the integrability or the hamiltonization of the system. 3.
$(iii)$ The Routh sphere, the ellipsoid rolling on a plane and the falling disk [21, 23, 14] are seen as particular cases of this example. $\diamond$ The horizontal gauge momenta as parallel sections. Let us consider the basis $\mathfrak{B}_{T\widetilde{Q}}=\left\\{X_{0},Y_{1},Y_{2},\right.$ $\left.Z_{1},Z_{2}\right\\}$ where $X_{0},Y_{1},Y_{2}$ are the vector fields defined previously, while $Z_{1},Z_{2}$ generate the distribution $W$, which is now chosen to be $W=S^{\perp}\cap V$. The Christoffel symbols of $\hat{\nabla}$, in the basis $\mathfrak{B}_{T\widetilde{Q}}$ and $\mathfrak{B}_{\mathfrak{g}_{S}}$, are given by $\left(\\!\\!\begin{array}[]{c}\hat{\Gamma}_{01}^{1}\\\ \hat{\Gamma}_{01}^{2}\end{array}\\!\\!\right)=\frac{1}{2}[\kappa|_{S}]^{-1}\\!\\!\left(\\!\\!\\!\\!\begin{array}[]{c}\kappa^{\prime}_{11}(1-\gamma_{3}^{2})\\\ (\kappa^{\prime}_{12}+m\varrho B)(1-\gamma_{3}^{2})-H_{21}\end{array}\\!\\!\\!\\!\right)\mbox{ \ and \ }\left(\\!\\!\begin{array}[]{c}\hat{\Gamma}_{02}^{1}\\\ \hat{\Gamma}_{02}^{2}\end{array}\\!\\!\right)=\frac{1}{2}[\kappa|_{S}]^{-1}\\!\\!\left(\\!\\!\\!\\!\begin{array}[]{c}(\kappa^{\prime}_{12}+mAL)(1-\gamma_{3}^{2})-H_{12}\\\ \kappa^{\prime}_{22}(1-\gamma_{3}^{2})\end{array}\\!\\!\\!\\!\right),$ and $\hat{\Gamma}_{ij}^{1}=\hat{\Gamma}_{ij}^{2}=0$. Following Def. 3.20, the bilinear form $\Sigma=\Sigma^{1}\otimes\xi_{1}+\Sigma^{2}\otimes\xi_{2}$ is given by $\Sigma^{1}=-(\hat{\Gamma}_{0j}^{1}+R_{1j})X^{0}\wedge Y^{j}\quad\mbox{and}\quad\Sigma^{2}=-(\hat{\Gamma}_{0j}^{2}+R_{2j})X^{0}\wedge Y^{j},$ where the functions $R_{ij}$ are given in (5.43).
Then, the horizontal gauge symmetries can be seen as parallel sections along the dynamics with respect to the $\Sigma$-connection: $\overset{\textit{\tiny{$\Sigma$}}}{\nabla}_{\dot{\gamma}}\zeta=0.$ ### 5.3 A homogeneous ball on a surface of revolution Let us consider the holonomic system formed by a homogeneous sphere of mass ${\bf m}$ and radius $r>0$, whose center $C$ is constrained to belong to a convex surface of revolution $\Sigma$ (i.e., the ball rolls on the surface $\widetilde{\Sigma}$, see Figure 3). The surface $\Sigma$ is obtained by rotating about the $z$-axis the graph of a convex and smooth function $\phi:\mathbb{R}_{+}\longrightarrow\mathbb{R}$. Thus, $\Sigma$ is described by the equation $z=\phi(x^{2}+y^{2})$. To guarantee smoothness and convexity of the surface, we assume that $\phi$ satisfies $\phi^{\prime}(0^{+})=0$, $\phi^{\prime}(s)>0$ and $\phi^{\prime\prime}(s)>0$, when $s>0$. To ensure that the ball has only one contact point with the surface we require the curvature of the graph of $\phi$ to be at most $1/r$. The configuration manifold $Q$ is $\mathbb{R}^{2}\times SO(3)$ with coordinates $(x,y,g)$ where $g$ is the orthogonal matrix fixing the attitude of the sphere and $(x,y)$ are the coordinates of $C$ with respect to a reference frame with origin $O$ and $z$-axis coinciding with the figure axis of $\Sigma$. Figure 3: The homogeneous ball on a convex surface of revolution.
Let us denote by $n=n(x,y)$ the outward normal unit vector to $\Sigma$ with components $(n_{1},n_{2},n_{3})$ given by $\frac{n_{1}}{n_{3}}=-2x\phi^{\prime},\quad\frac{n_{2}}{n_{3}}=-2y\phi^{\prime}\quad\mbox{and}\quad n_{3}=-\frac{1}{\sqrt{1+4(x^{2}+y^{2})(\phi^{\prime})^{2}}}.$ If $\omega=(\omega_{1},\omega_{2},\omega_{3})$ is the angular velocity of the ball in the space frame, then the Lagrangian of the holonomic system on $TQ$ is $L(x,y,g,\dot{x},\dot{y},\omega)=\frac{{\bf m}}{2n_{3}^{2}}\left((1-n_{2}^{2})\dot{x}^{2}+2n_{1}n_{2}\,\dot{x}\dot{y}+\dot{y}^{2}(1-n_{1}^{2})\right)+\frac{1}{2}\langle\mathbb{I}\omega,\omega\rangle-{\bf m}{\bf g}\phi\,,$ (5.46) where ${\bf g}$ denotes the gravitational acceleration and $\mathbb{I}$ the moment of inertia of the sphere with respect to its center of mass. Geometry of the constrained system. The ball rotates without sliding on the surface $\widetilde{\Sigma}$, and hence the nonholonomic constraint equations are $\dot{x}=-r\left(\omega_{2}n_{3}-\omega_{3}n_{2}\right)\,,\qquad\dot{y}=-r\left(\omega_{3}n_{1}-\omega_{1}n_{3}\right).$ We denote by $\\{X_{1}^{R},X_{2}^{R},X_{3}^{R}\\}$ the right invariant vector fields on $SO(3)$ and by $\\{\rho_{1},\rho_{2},\rho_{3}\\}$ the right Maurer-Cartan 1-forms, that form a basis of $T^{*}SO(3)$ dual to $\\{X_{1}^{R},X_{2}^{R},X_{3}^{R}\\}$. Then the constraint 1-forms are given by $\epsilon^{1}:=dx-r\left(n_{2}\rho_{3}-n_{3}\rho_{2}\right)\,,\qquad\epsilon^{2}:=dy-r\left(n_{3}\rho_{1}-n_{1}\rho_{3}\right)\,.$ The constraint distribution $D$ defined by the annihilator of $\epsilon^{1}$ and $\epsilon^{2}$ has fiber, at $q=(x,y,g)$, given by $D_{q}=\textrm{span}\left\\{Y_{x}:=\partial_{x}-\frac{1}{rn_{3}}(n_{2}X_{n}-X_{2}^{R}),\ Y_{y}:=\partial_{y}+\frac{1}{rn_{3}}(n_{1}X_{n}-X_{1}^{R}),\ X_{n}\right\\}\,,$ (5.47) where $X_{n}:=n_{1}\,X_{1}^{R}+n_{2}\,X_{2}^{R}+n_{3}\,X_{3}^{R}$.
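As a sanity check of the normal-vector formulas (editorial addition; here `p` is a stand-in for $\phi^{\prime}(x^{2}+y^{2})$, and the convention is $n\propto(2x\phi^{\prime},2y\phi^{\prime},-1)$, so that $n_{3}<0$ and $n_{1}/n_{3}=-2x\phi^{\prime}$):

```python
import sympy as sp

x, y, p = sp.symbols('x y p', real=True)  # p stands for phi'(x^2 + y^2)

norm = sp.sqrt(1 + 4*(x**2 + y**2)*p**2)
n = sp.Matrix([2*x*p, 2*y*p, -1]) / norm  # candidate outward unit normal

# Tangent vectors to the surface z = phi(x^2 + y^2)
t_x = sp.Matrix([1, 0, 2*x*p])
t_y = sp.Matrix([0, 1, 2*y*p])

assert sp.simplify(n.dot(t_x)) == 0           # n is orthogonal to the surface
assert sp.simplify(n.dot(t_y)) == 0
assert sp.simplify(n.dot(n)) == 1             # n is a unit vector
assert sp.simplify(n[0]/n[2] + 2*x*p) == 0    # n_1/n_3 = -2x phi'
```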
Consider the basis of $TQ$ ${\bf B}_{TQ}=\left\\{Y_{x},Y_{y},X_{n},Z_{1},Z_{2}\right\\}\,,$ (5.48) where $Z_{1}:=\frac{1}{rn_{3}}X_{2}^{R}-\frac{n_{2}}{rn_{3}}X_{n}$ and $Z_{2}:=-\frac{1}{rn_{3}}X_{1}^{R}+\frac{n_{1}}{rn_{3}}X_{n}$ with associated coordinates $(\dot{x},\dot{y},\omega_{n},w^{1},w^{2})$, for $\omega_{n}=n\cdot\omega=n_{i}\omega_{i}$, the normal component of the angular velocity $\omega$. The dual frame of (5.48) is ${\bf B}_{T^{*}Q}=\left\\{dx,dy,\rho_{n},\epsilon^{1},\epsilon^{2}\right\\}\,,$ (5.49) where $\rho_{n}=n_{i}\rho_{i}$, with associated coordinates $(p_{x},p_{y},p_{n},M_{1},M_{2})$ on $T^{*}Q$. The manifold $\mathcal{M}=\kappa^{\sharp}(D)$ is given by $\mathcal{M}=\left\\{(x,y,g;p_{x},p_{y},p_{n},M_{1},M_{2})\ :\ M_{1}=\tfrac{-I}{I+mr^{2}}p_{x},\ M_{2}=\tfrac{-I}{I+mr^{2}}p_{y}\right\\}.$ The symmetries. Consider the action $\Psi$ of the Lie group $G=SO(2)\times SO(3)$ on the manifold $Q$ given, at each $(x,y,g)\in Q$ and $(h_{\theta},h)\in SO(2)\times SO(3)$, by $\Psi_{(h_{\theta},h)}(x,y,g)=(h_{\theta}(x,y)^{t},\tilde{h}_{\theta}gh),$ where $\tilde{h}_{\theta}$ is the $3\times 3$ rotation matrix of angle $\theta$ about the $z$-axis. In other words, $SO(3)$ acts on the right on itself and $SO(2)$ acts by rotations about the figure axis of the surface $\Sigma$. The Lagrangian (5.46) and the constraints (5.47) are invariant with respect to the lift of this action to $TQ$ given by $\Psi_{(h_{\theta},h)}(x,y,g,\dot{x},\dot{y},\omega)=(h_{\theta}(x,y)^{t},\tilde{h}_{\theta}gh,h_{\theta}(\dot{x},\dot{y})^{t},\omega)$. The invariance of the kinetic energy and the constraints $D$ ensures that $\Psi$ restricts to an action on $\mathcal{M}$, that leaves the equations of motion invariant.
The Lie algebra $\mathfrak{g}$ of $G$ is isomorphic to $\mathbb{R}\times\mathbb{R}^{3}$ with the infinitesimal generators $(1;{\bf 0})_{Q}=-y\partial_{x}+x\partial_{y}+X^{R}_{3}\quad\mbox{and}\quad(0;{\bf e}_{i})_{Q}=\alpha_{i}\,X_{1}^{R}+\beta_{i}\,X_{2}^{R}+\gamma_{i}\,X_{3}^{R},\textrm{ for }i=1,2,3,$ where ${\bf e}_{i}$ denotes the $i$-th element of the canonical basis of $\mathbb{R}^{3}$ and $\alpha=(\alpha_{1},\alpha_{2},\alpha_{3})$, $\beta=(\beta_{1},\beta_{2},\beta_{3})$ and $\gamma=(\gamma_{1},\gamma_{2},\gamma_{3})$ the rows of the matrix $g\in SO(3)$. Observe that $(1;{\bf 0})_{Q}$ is an infinitesimal generator of the $SO(2)$-action and the others are infinitesimal generators of the $SO(3)$-action. We then note that the $G$-symmetry satisfies the dimension assumption and it is proper and free whenever $(x,y)\neq(0,0)$ (note that the rank of $V$ is 3 for $(x,y)=(0,0)$ and it is 4 elsewhere, showing that at these points the action is not even locally free). Let us denote by $\widetilde{Q}\subset Q$ and $\widetilde{\mathcal{M}}\subset\mathcal{M}$ the manifolds where the $G$-action is free, i.e. $(x,y)\neq(0,0)$. The vertical distribution $S=D\cap V$ on $\widetilde{Q}$ has rank 2 with fibers $S_{q}=\textrm{span}\\{Y_{1}:=-yY_{x}+xY_{y},Y_{2}:=X_{n}\\}.$ The bundle $\mathfrak{g}_{S}\rightarrow Q$ has a global basis $\mathfrak{B}_{\mathfrak{g}_{S}}$ of sections given by $\mathfrak{B}_{\mathfrak{g}_{S}}=\left\\{\xi_{1}:=\left(1;\frac{x}{r\,n_{3}},\frac{y}{r\,n_{3}},0\right),\xi_{2}:=(0;n\,g)\right\\}$ and we check that $(\xi_{1})_{Q}=Y_{1}$ and $(\xi_{2})_{Q}=Y_{2}$. Finally, we observe that $\widetilde{Q}/G$ has dimension 1 ($\rho_{\tilde{Q}}:\tilde{Q}\to\tilde{Q}/G$ is given by $\rho_{\tilde{Q}}(x,y,g)=x^{2}+y^{2}$) and hence the $G$-symmetry satisfies Conditions $(\mathcal{A}1)$-$(\mathcal{A}4)$ on $\widetilde{Q}$. The existence of horizontal gauge momenta.
Using the basis (5.48) and the definition of $S$, we consider the decomposition $T\widetilde{Q}=H\oplus S\oplus W,$ where $W$ is a vertical complement of the constraints given by $W:=\textrm{span}\\{Z_{1},Z_{2}\\}$ and $H:=S^{\perp}\cap D$ is generated by $X_{0}:=xY_{x}+yY_{y}$. As in Section 5.2, in this case, it is enough (and straightforward using that $n_{3}(x,y)$ is rotational invariant and that $[X_{1}^{R},X_{2}^{R}]=-X_{3}^{R}$ for all cyclic permutations) to check that $\kappa([Y_{1},Y_{2}],Y_{1})=0$ and $\kappa([Y_{1},Y_{2}],Y_{2})=0$ to guarantee that the kinetic energy is strong invariant on $S$. Finally, we also see that $\kappa(X_{0},[Y_{i},X_{0}])=0$, for $i=1,2$. Therefore, following Theorem 3.15, the system admits two $G$-invariant (functionally independent) horizontal gauge momenta ${\mathcal{J}}_{1}$ and ${\mathcal{J}}_{2}$, showing that the first integrals found in [52, 38, 59, 16, 27] can be obtained from the symmetry of the system as horizontal gauge momenta. The computation of the 2 horizontal gauge momenta. We now characterize the coordinate functions of the horizontal gauge symmetries written in the basis $\mathfrak{B}_{\mathfrak{g}_{S}}$ on $\widetilde{Q}$. That is, let us denote by $J_{1}:={\bf i}_{Y_{1}}\Theta=-yp_{x}+xp_{y}\qquad\mbox{and}\qquad J_{2}:={\bf i}_{Y_{2}}\Theta=p_{n}.$ Using the orbit projection $\rho_{\tilde{Q}}:\tilde{Q}\to\tilde{Q}/G$, a $G$-invariant function $f$ on $Q$ can be thought of as depending on the variable $\tau=x^{2}+y^{2}$, i.e., $f=f(\tau)$.
Following Theorem 3.15$(ii)$, a function $\mathcal{J}=f_{1}J_{1}+f_{2}J_{2}$ for $f_{1},f_{2}\in C^{\infty}(Q)^{G}$ is a horizontal gauge momentum if and only if $(f_{1},f_{2})$ is a solution of the linear system of ordinary differential equations on $\widetilde{Q}/G$, $R\left(\\!\\!\begin{array}[]{c}f_{1}\\\ f_{2}\end{array}\\!\\!\right)=\left(\\!\\!\begin{array}[]{c}\bar{X}_{0}(f_{1})\\\ \bar{X}_{0}(f_{2})\end{array}\\!\\!\right)\quad\mbox{where}\quad R=2\tau\left(\begin{array}[]{cc}0&-2\frac{rI}{E}n_{3}^{2}(2(\phi^{\prime})^{3}-\phi^{\prime\prime})\\\ \tfrac{A}{r}n_{3}^{2}&0\end{array}\\!\\!\right)$ (5.50) for $A=\phi^{\prime}+2\tau\phi^{\prime\prime}$ and $\bar{X}_{0}=T\rho_{\widetilde{Q}}(X_{0})=2\tau\tfrac{\partial}{\partial\tau}$. The matrix $R$ is computed using that $R=[\kappa|_{S}]^{-1}[N]$ where $[N]=\frac{2I}{r}\tau\left(\\!\\!\begin{array}[]{cc}0&-2\tau n_{3}^{2}(2(\phi^{\prime})^{3}-\phi^{\prime\prime})\\\ An_{3}^{2}&0\end{array}\\!\\!\right)\quad\mbox{and}\quad[\kappa|_{S}]=\left(\\!\\!\begin{array}[]{cc}\tfrac{E}{r^{2}}\tau&0\\\ 0&I\end{array}\\!\\!\right).$ Since this system admits two independent solutions $f^{1}=(f_{1}^{1},f_{2}^{1})$ and $f^{2}=(f_{1}^{2},f_{2}^{2})$ on $\widetilde{Q}/G$, the nonholonomic system admits two $G$-invariant horizontal gauge momenta $\mathcal{J}_{1}$, $\mathcal{J}_{2}$ defined on $\widetilde{\mathcal{M}}$ of the form $\mathcal{J}_{1}=f_{1}^{1}J_{1}+f_{2}^{1}J_{2}\qquad\mbox{and}\qquad\mathcal{J}_{2}=f_{1}^{2}J_{1}+f_{2}^{2}J_{2},$ (5.51) recalling that $J_{1}=-yp_{x}+xp_{y}$ and $J_{2}=p_{n}$. ###### Remark 5.5. Let us denote by $\bar{\mathcal{J}}_{1}$, $\bar{\mathcal{J}}_{2}$ the functions on $\widetilde{\mathcal{M}}/G$ associated to (5.51). 
* $(i)$ The (reduced) first integrals $\bar{\mathcal{J}}_{1}$, $\bar{\mathcal{J}}_{2}$ can be extended by continuity to the differential space $\mathcal{M}/G$, and thus $\mathcal{J}_{1}$, $\mathcal{J}_{2}$ are $G$-invariant functions on $\mathcal{M}$ (see [27] for details); in this case we say that the system admits $2=\textup{rank}(\mathfrak{g}_{S})$ horizontal gauge momenta. * $(ii)$ The system of differential equations (5.50) can be written as $R_{1}f_{2}=f_{1}^{\prime}\qquad\mbox{and}\qquad R_{2}f_{1}=f_{2}^{\prime},$ where $R_{1}=R_{1}(\tau)=-2\tfrac{rI}{E}n_{3}^{2}(2(\phi^{\prime})^{3}-\phi^{\prime\prime})$ and $R_{2}=R_{2}(\tau)=\tfrac{A}{r}n_{3}^{2}$. Hence $\bar{\mathcal{J}}_{1}$, $\bar{\mathcal{J}}_{2}$ are first integrals of Routh type found in [38] (see also [23, 59, 16, 54]) and shown to be horizontal gauge momenta in [27, 29]. $\diamond$ Integrability and reconstruction. The reduced integrability of this system was established in [52] and its complete broad integrability has been extensively studied in [38, 59, 16, 27, 26], using the existence of the first integrals $\mathcal{J}_{1}$ and $\mathcal{J}_{2}$ without relating their existence to the symmetry group. The symmetry origin of $\mathcal{J}_{1}$ and $\mathcal{J}_{2}$ was announced in [9], and then proved in [54, 29]. Here we stress that Theorem 3.15 can be applied and therefore ensures the reduced integrability of the system. That is, $\bar{\mathcal{J}}_{1}$, $\bar{\mathcal{J}}_{2}$, $H_{\mbox{\tiny{red}}}$ are first integrals of the reduced dynamics $X_{\mbox{\tiny{red}}}$ defined on the manifold $\tilde{\mathcal{M}}/G$ of dimension 4. Moreover, as proved in [38, 59], the reduced dynamics consists of periodic motions or equilibria, and hence, since the symmetry group is compact, the complete dynamics is generically quasi-periodic on tori of dimension 3 (see Theorem 4.12 and [38, 26]). 
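Remark 5.5$(ii)$ reduces the momentum equation (5.50) to the pair of scalar ODEs $R_{1}f_{2}=f_{1}^{\prime}$ and $R_{2}f_{1}=f_{2}^{\prime}$ on $\widetilde{Q}/G$. The following purely illustrative sketch (the constant coefficients below are hypothetical stand-ins, not the actual $R_{1}(\tau)$, $R_{2}(\tau)$ of (5.50), which depend on $\phi$) integrates this pair from two independent initial conditions and checks that the Wronskian of the two solutions stays constant and nonzero, which is what guarantees the two functionally independent solutions $f^{1}$, $f^{2}$ entering (5.51).

```python
import math

def integrate(R1, R2, f0, tau_max=1.0, n=1000):
    """RK4 integration of the pair f1' = R1(tau)*f2, f2' = R2(tau)*f1."""
    h = tau_max / n
    f1, f2 = f0
    tau = 0.0

    def rhs(t, a, b):
        return R1(t) * b, R2(t) * a

    for _ in range(n):
        k1 = rhs(tau, f1, f2)
        k2 = rhs(tau + h / 2, f1 + h / 2 * k1[0], f2 + h / 2 * k1[1])
        k3 = rhs(tau + h / 2, f1 + h / 2 * k2[0], f2 + h / 2 * k2[1])
        k4 = rhs(tau + h, f1 + h * k3[0], f2 + h * k3[1])
        f1 += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        f2 += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        tau += h
    return f1, f2

# Hypothetical constant coefficients (stand-ins for R1(tau), R2(tau)).
R1 = lambda tau: 2.0
R2 = lambda tau: 0.5
# With constants, f1'' = R1*R2*f1, so solutions are cosh/sinh of mu*tau, mu = sqrt(R1*R2).
mu = math.sqrt(2.0 * 0.5)

# Two solutions from independent initial data (f1, f2)(0).
sol_a = integrate(R1, R2, (1.0, 0.0))  # closed form: (cosh(tau), sinh(tau)/2)
sol_b = integrate(R1, R2, (0.0, 1.0))  # closed form: (2*sinh(tau), cosh(tau))

# The Wronskian f1^a*f2^b - f2^a*f1^b has zero derivative for ANY R1, R2,
# so independence of the initial data persists for all tau.
wronskian = sol_a[0] * sol_b[1] - sol_a[1] * sol_b[0]
```

For the hypothetical constants above the Wronskian equals $\cosh^{2}\tau-\sinh^{2}\tau=1$, so the two numerical solutions remain independent for all $\tau$, mirroring the existence of the two horizontal gauge momenta.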
Indeed, one can say more about the geometric structure of the phase space $\widetilde{\mathcal{M}}$ of the complete system: it is endowed with the structure of a fibration by tori of dimension at most 3 (see [26] for a detailed study of the geometry of the complete system on $\widetilde{\mathcal{M}}$). Hamiltonization. Even though the hamiltonization of this example has been studied in [8], in this section we see the hamiltonization as a consequence of Theorem 3.15 and show how the resulting Poisson bracket on $\widetilde{\mathcal{M}}/G$ depends on the linear system of ordinary differential equations (5.50). By Theorem 4.5, the nonholonomic system is hamiltonizable by a gauge transformation; that is, on $\widetilde{\mathcal{M}}/G$ the reduced nonholonomic system is described by a Poisson bracket with 2-dimensional leaves given by the common level sets of the horizontal gauge momenta $\bar{\mathcal{J}}_{1}$, $\bar{\mathcal{J}}_{2}$ induced by (5.51) (recall that $\bar{\mathcal{J}}_{i}$ are the functions on $\widetilde{\mathcal{M}}/G$ such that $\rho^{*}(\bar{\mathcal{J}}_{i})={\mathcal{J}}_{i}$). Following Theorem 4.10, we compute the 2-form $B_{\mbox{\tiny{HGS}}}$, defining the dynamical gauge transformation, using the momentum equation (5.50). Since $dY^{1}|_{D}=0$, then $B_{\mbox{\tiny{HGS}}}:=\langle J,\mathcal{K}_{\mbox{\tiny{$\mathcal{W}$}}}\rangle- p_{1}R_{12}{\mathcal{X}}^{0}\wedge{\mathcal{Y}}^{2}+p_{2}R_{21}{\mathcal{X}}^{0}\wedge{\mathcal{Y}}^{1}+p_{2}d\mathcal{Y}^{2},$ (5.52) where $\mathcal{X}^{0}=\tau_{\mbox{\tiny{$\mathcal{M}$}}}^{*}X^{0}$ and $\mathcal{Y}^{i}=\tau_{\mbox{\tiny{$\mathcal{M}$}}}^{*}Y^{i}$. 
That is, $\begin{split}\langle J,\mathcal{K}_{\mbox{\tiny{$\mathcal{W}$}}}\rangle|_{\mathcal{C}}&=M_{1}\,d\epsilon^{1}|_{\mathcal{C}}+M_{2}\,d\epsilon^{2}|_{\mathcal{C}},\\\ &=-\frac{Ir}{E(x^{2}+y^{2})}\left(p_{1}(\tfrac{1}{rn_{3}^{2}}+2n_{3}^{2}A){\mathcal{X}}^{0}\wedge{\mathcal{Y}}^{2}+p_{0}n_{3}(2\phi^{\prime}n_{3}+\tfrac{1}{r}){\mathcal{Y}}^{1}\wedge{\mathcal{Y}}^{2}\right)|_{\mathcal{C}},\end{split}$ and using that $d{\mathcal{Y}}^{2}|_{\mathcal{C}}=\frac{(x^{2}+y^{2})}{n_{3}}p_{2}{\mathcal{X}}^{0}\wedge{\mathcal{Y}}^{1}|_{\mathcal{C}}$ we obtain $B_{\mbox{\tiny{HGS}}}=(x^{2}+y^{2})p_{2}(\tfrac{1}{n_{3}}+2\tfrac{A}{r}n_{3}^{2}){\mathcal{X}}^{0}\wedge{\mathcal{Y}}^{1}+\tfrac{rI}{E}(\tfrac{1}{rn_{3}}+2\phi^{\prime})(p_{1}{\mathcal{X}}^{0}\wedge{\mathcal{Y}}^{2}-p_{0}n_{3}^{2}{\mathcal{Y}}^{1}\wedge{\mathcal{Y}}^{2}).$ (5.53) ###### Remark 5.6. Since the action is not free, $\mathcal{M}/G$ is a semialgebraic variety that consists of two strata: a singular 1-dimensional stratum corresponding to the points at which the action is not free, and the 4-dimensional regular stratum $\widetilde{\mathcal{M}}/G$ (where the action is free). Moreover, analyzing the change of coordinates between ${\bf B}_{T^{*}Q}$ and ${\mathfrak{B}}_{T^{*}Q}$ we get $\tau=x^{2}+y^{2},\ p_{0}=xp_{x}+yp_{y},\ p_{1}=-yp_{x}+xp_{y},\ p_{2}=p_{n},$ and adding $p_{3}=p_{x}^{2}+p_{y}^{2}$ we recover the coordinates used in [38, 27] on $\widetilde{\mathcal{M}}/G$. $\diamond$ ###### Remark 5.7. Since the convexity of the function $\phi$ that parametrizes the surface $\Sigma$ is not strictly used, this example also describes the geometry and dynamics of a homogeneous ball rolling on a surface of revolution whose normal vector field has $n_{3}\neq 0$. 
$\diamond$ ### 5.4 Comments on the hypothesis of Theorem 3.15: examples and counterexamples Theorem 3.15 shows that a nonholonomic system with symmetries satisfying certain hypotheses admits $k$ functionally independent $G$-invariant horizontal gauge momenta. Next, assuming Conditions $(\mathcal{A}1)$-$(\mathcal{A}3)$, we study what may happen if the other hypotheses of Theorem 3.15 are not satisfied. In particular we study three cases: when the metric is not strong invariant, when $\kappa(X_{0},[X_{0},Y])$ is different from zero, and finally when Condition $(\mathcal{A}4)$ is not verified (i.e., $\textup{dim}(Q/G)\neq 1$). For each case we give examples and counterexamples to illustrate our conclusions. #### Analyzing the strong invariance condition and $\kappa(X_{0},[X_{0},Y])=0$ Consider a nonholonomic system $(\mathcal{M},\Omega_{\mbox{\tiny{$\mathcal{M}$}}}|_{\mathcal{C}},H_{\mbox{\tiny{$\mathcal{M}$}}})$ with a $G$-symmetry satisfying Conditions $(\mathcal{A}1)$-$(\mathcal{A}4)$. Suppose that $(f_{1},...,f_{k})$ is a solution of the system of differential equations (3.22); then, from (3.20), we observe that $\mathcal{J}=f_{i}J_{i}$ is a horizontal gauge momentum if and only if $f_{i}\kappa(X_{0},[Y_{i},X_{0}])=0\quad\mbox{and}\quad f_{i}(\kappa(Y_{j},[Y_{i},Y_{l}])+\kappa(Y_{l},[Y_{i},Y_{j}]))=0,\mbox{ for each }j,l,$ for an $S$-orthogonal horizontal space $H$. That is, in some cases, even if $\kappa(X_{0},[X_{0},Y_{i_{0}}])\neq 0$ for some $Y_{i_{0}}\in\Gamma(S)$ or the metric is not strong invariant, we may still have a horizontal gauge momentum. We now present two examples that show the main features of these phenomena. The metric is not strong invariant on $S$. The following mathematical example has the property that the metric is not strong invariant, and it admits only one horizontal gauge momentum even though the rank of the distribution $S$ is 3. 
Precisely, consider the nonholonomic system on the manifold $Q=\mathbb{R}^{3}\times SE(2)$ with coordinates $(u,v,x)\in\mathbb{R}^{3}$ and $(y,z,\theta)\in SE(2)$ with Lagrangian given by $L(q,\dot{q})=\frac{1}{2}\left(\dot{u}^{2}+\dot{v}^{2}+\dot{x}^{2}+\dot{y}^{2}+\dot{z}^{2}+\dot{\theta}^{2}+4(\sin\theta\,\dot{z}+\cos\theta\,\dot{y})\dot{\theta}\right),$ and constraint 1-forms given by $\epsilon^{u}=du-(1+\cos x)d\theta\qquad\mbox{and}\qquad\epsilon^{v}=dv-\sin xd\theta.$ The symmetry is given by the action of the Lie group $G=\mathbb{R}^{2}\times SE(2)$ defined, at each $(a,b;c,d,\beta)\in G$, by $\Psi((a,b;c,d,\beta),(u,v,x,y,z,\theta))=(u+a,v+b,x,h_{\beta}\left(\\!\begin{matrix}y\\\ z\end{matrix}\\!\right)+\left(\\!\begin{matrix}c\\\ d\end{matrix}\\!\right),\theta+\beta),$ where $h_{\beta}$ is the $2\times 2$ rotation matrix of angle $\beta$. The distribution $S=D\cap V$ is generated by the $G$-invariant vector fields $\\{Y_{\theta},Y_{1},Y_{2}\\}$ given by $Y_{\theta}:=\partial_{\theta}+(1+\cos x)\partial_{u}+\sin x\partial_{v},\,Y_{1}:=\cos\theta\partial_{y}+\sin\theta\partial_{z},\,Y_{2}:=-\sin\theta\partial_{y}+\cos\theta\partial_{z},$ and $X_{0}=\partial_{x}$ generates $H=S^{\perp}\cap D$. It is straightforward to check that Conditions $(\mathcal{A}1)$-$(\mathcal{A}4)$ are satisfied and that $\kappa(X_{0},[X_{0},Y])=0$ for all $Y\in\Gamma(S)$. However, the metric is not strong invariant on $S$: $\kappa(Y_{2},[Y_{\theta},Y_{1}])=1$ and $\kappa(Y_{\theta},[Y_{1},Y_{2}])=0$. From (3.20), we can observe that $\mathcal{J}=2p_{1}+p_{\theta}$ is the only horizontal gauge momentum of the system in spite of the rank of $S$ being 3 (where, as usual, $p_{1}={\bf i}_{Y_{1}}\Theta_{\mbox{\tiny{$\mathcal{M}$}}}$ and $p_{\theta}={\bf i}_{Y_{\theta}}\Theta_{\mbox{\tiny{$\mathcal{M}$}}}$). Dropping condition $\kappa(X_{0},[X_{0},Y])=0$. 
We illustrate with a multidimensional nonholonomic particle the different scenarios obtained when $\kappa(X_{0},[X_{0},Y])\neq 0$ for a section $Y\in\Gamma(S)$ (see Table 5.4). Consider the nonholonomic system on $\mathbb{R}^{5}$ with Lagrangian $L(q,\dot{q})=\frac{1}{2}\dot{q}\cdot\kappa\,\dot{q}-V(x_{1})$, where $\kappa$ is the kinetic energy metric $\kappa=\begin{pmatrix}1&0&1&0&1\\\ 0&1&0&0&0\\\ 1&0&1&0&0\\\ 0&0&0&1&1\\\ 1&0&0&1&1\\\ \end{pmatrix},$ and with the nonintegrable distribution $D$ given, at each $q=(x_{1},\ldots,x_{5})\in\mathbb{R}^{5}$, by $\displaystyle D_{q}=\textrm{span}\\{$ $\displaystyle D_{1}=f(x_{1})\,\partial_{x_{1}}+b(x_{1})\,\partial_{x_{3}}+c(x_{1})\,\partial_{x_{4}}\,,D_{2}=h(x_{1})\,\partial_{x_{1}}+g(x_{1})\,\partial_{x_{2}}\,,$ $\displaystyle D_{3}=d(x_{1})\,\partial_{x_{1}}+j(x_{1})\,\partial_{x_{4}}+l(x_{1})\,\partial_{x_{5}}\\}\,,$ where $b(x_{1}),c(x_{1}),d(x_{1}),f(x_{1}),g(x_{1}),h(x_{1}),j(x_{1}),l(x_{1})$ are functions on $\mathbb{R}^{5}$ depending only on the coordinate $x_{1}$. The group $\mathbb{R}^{4}$ of translations along the $x_{2}$, $x_{3}$, $x_{4}$ and $x_{5}$ directions acts on the system and leaves both the Lagrangian and the nonholonomic constraints invariant. It is straightforward to see that this $G$-symmetry satisfies Conditions $(\mathcal{A}1)$-$(\mathcal{A}4)$. The fiber of the distribution $S$ over $q\in Q$ is $S_{q}=\textrm{span}\\{Y_{1}:=f(x_{1})D_{2}-h(x_{1})D_{1}\,,Y_{2}:=h(x_{1})D_{3}-d(x_{1})D_{2}\\}$. Since the translational Lie group $\mathbb{R}^{4}$ is abelian, the kinetic energy is strong invariant on $V$ (see Example 3.10). The distribution $H=S^{\perp}\cap D$ is generated by the vector field $X_{0}=\beta_{1}(x_{1})\,D_{1}+\beta_{2}(x_{1})\,D_{2}+\beta_{3}(x_{1})\,D_{3}$, for suitable functions $\beta_{1}$, $\beta_{2}$ and $\beta_{3}$ (defined on $\mathbb{R}^{5}$ but depending only on the coordinate $x_{1}$). 
For particular choices of the functions $b(x_{1}),c(x_{1}),d(x_{1}),f(x_{1}),g(x_{1}),h(x_{1}),j(x_{1}),l(x_{1})$ the two terms $\kappa(X_{0},[Y_{1},X_{0}])$ and $\kappa(X_{0},[Y_{2},X_{0}])$ may not vanish. The computations and their expressions are rather long and were implemented in Mathematica. The following table summarizes the different situations we obtain for the multidimensional nonholonomic particle ($\textup{rank}(S)=2$):

| behaviour of $\kappa(X_{0},[X_{0},Y])$ | $\sharp$ horizontal gauge momenta |
| --- | --- |
| $\kappa(X_{0},[Y_{1},X_{0}])=0$ and $\kappa(X_{0},[Y_{2},X_{0}])\neq 0$ | 0 |
| $\kappa(X_{0},[Y_{1},X_{0}])=0$ and $\kappa(X_{0},[Y_{2},X_{0}])\neq 0$ | 1 |
| $\kappa(X_{0},[Y_{1},X_{0}])\neq 0$ and $\kappa(X_{0},[Y_{2},X_{0}])\neq 0$ | 0 |

#### Cases when Condition $(\mathcal{A}4)$ is not satisfied (or $\textup{rank}(H)\neq 1$) When Condition $(\mathcal{A}4)$ is not verified, it is still possible to work with the momentum equation stated in Proposition 3.3. Basically, in the case $\textup{rank}(H)=0$ we still have $\textup{rank}(S)$ horizontal gauge momenta, while if $\textup{rank}(H)>1$ we cannot say anything. If $\textup{rank}(H)=0$. In this case, $TQ=V$, which means that $Q\simeq G$. That is, consider a nonholonomic system $(L,D)$ on a Lie group $G$ for which the left action is a symmetry of the system. Since the only $G$-invariant functions are constant, we need to check that, for a basis $\mathfrak{B}_{\mathfrak{g}_{S}}=\\{\xi_{1},...,\xi_{k}\\}$ of $\Gamma(\mathfrak{g}_{S})$, the momentum equation (3.17) is satisfied only for constant functions $f_{i}=c_{i}$. 
In this case, since $X_{\mbox{\tiny{nh}}}\in\Gamma(\mathcal{V})$, the coordinate momentum equation (3.20), for $f\in C^{\infty}(Q)^{G}$, reduces to $f_{i}v^{l}v^{j}\kappa(Y_{j},[Y_{i},Y_{l}])=0.$ The constant functions $f_{i}=c_{i}$ are $k$ (independent) solutions of the momentum equation if and only if the kinetic energy is strong invariant on $S$, and hence the sections of the basis $\mathfrak{B}_{\mathfrak{g}_{S}}$ are horizontal gauge symmetries. As illustrative examples, see the vertical disk and the Chaplygin sleigh in [28] and [11], respectively. If $\textup{rank}(H)>1$. In this case we cannot assert the existence of a global basis of $H$. However, in some examples the horizontal space $H$ may admit a global basis, which we denote by $\\{X_{1},...,X_{n}\\}$ for $n=\textup{rank}(H)$. In this case, we observe that the second summand of the momentum equation (3.20) gives the condition $\kappa(X_{\alpha},[Y_{i},X_{\beta}])-\kappa(X_{\beta},[Y_{i},X_{\alpha}])=0\quad\mbox{for all }\alpha,\beta=1,...,n$ and the third summand gives a system of partial differential equations whose solutions induce the horizontal gauge momenta. As an illustrative example, we can work out the Chaplygin ball [19, 24]: this example has a $G$-symmetry such that $\textup{rank}(S)=1$ and $\textup{rank}(H)=2$ with a global basis (see e.g. [36, 3]). However, working with the momentum equation (3.17), it is possible to show that the system admits 1 horizontal gauge momentum, recovering the known result in [19, 24, 15]. ## Appendix A Appendix: Almost Poisson brackets and gauge transformations Almost Poisson brackets. An almost Poisson bracket on a manifold $M$ is a bilinear bracket $\\{\cdot,\cdot\\}:C^{\infty}(M)\times C^{\infty}(M)\to C^{\infty}(M)$ that is skew-symmetric and satisfies the Leibniz identity (but does not necessarily satisfy the Jacobi identity). 
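The failure of the Jacobi identity that separates an almost Poisson bracket from a genuine Poisson bracket can be checked symbolically. The sketch below is a hypothetical illustration (not tied to the nonholonomic brackets of this paper): it builds a bracket on $\mathbb{R}^{3}$ from an antisymmetric matrix of structure functions $\pi_{ij}=\\{x_{i},x_{j}\\}$, computes the Jacobiator $\\{f,\\{g,h\\}\\}+\\{g,\\{h,f\\}\\}+\\{h,\\{f,g\\}\\}$, and compares the Lie-Poisson bracket of $\mathfrak{so}(3)^{*}$ (which is Poisson) with a perturbed bracket that is only almost Poisson.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)

def make_bracket(pi):
    """Bracket {f,g} = sum_ij pi[i][j] * df/dx_i * dg/dx_j from structure functions."""
    def bracket(f, g):
        return sp.expand(sum(pi[i][j] * sp.diff(f, coords[i]) * sp.diff(g, coords[j])
                             for i in range(3) for j in range(3)))
    return bracket

def jacobiator(br, f, g, h):
    """Left-hand side of the Jacobi identity; identically zero iff the bracket is Poisson."""
    return sp.simplify(br(f, br(g, h)) + br(g, br(h, f)) + br(h, br(f, g)))

# Lie-Poisson structure of so(3)*: {x,y}=z, {y,z}=x, {z,x}=y  (a genuine Poisson bracket)
pi_poisson = [[0, z, -y], [-z, 0, x], [y, -x, 0]]
# Hypothetical perturbation {z,x} = y + x**2: merely almost Poisson
pi_almost = [[0, z, -(y + x**2)], [-z, 0, x], [y + x**2, -x, 0]]

bp = make_bracket(pi_poisson)
ba = make_bracket(pi_almost)

jac_p = jacobiator(bp, x**2, y, z)  # vanishes: Jacobi identity holds
jac_a = jacobiator(ba, x, y, z)     # equals -2*x*z: Jacobi identity fails
```

For the perturbed structure the Jacobiator on coordinate functions is already nonzero, so no choice of test functions can rescue the Jacobi identity; this is exactly the situation of the reduced nonholonomic brackets discussed above, which are in general only almost Poisson.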
Due to bilinearity, an almost Poisson bracket induces a bivector field $\pi$ on $M$ defined, for each $f,g\in C^{\infty}(M)$, by $\pi(df,dg)=\\{f,g\\}.$ The vector field $X_{f}:=\\{\cdot,f\\}$ is the hamiltonian vector field of $f$. Equivalently, $X_{f}=-\pi^{\sharp}(df)$, where $\pi^{\sharp}:T^{*}M\to TM$ is the map such that for $\alpha,\beta\in T^{*}M$, $\beta(\pi^{\sharp}(\alpha))=\pi(\alpha,\beta)$. The characteristic distribution of the bracket $\\{\cdot,\cdot\\}$ is the distribution on $M$ generated by the hamiltonian vector fields. An almost Poisson bracket $\\{\cdot,\cdot\\}$ is Poisson when the Jacobi identity is satisfied, i.e., $\\{f,\\{g,h\\}\\}+\\{g,\\{h,f\\}\\}+\\{h,\\{f,g\\}\\}=0,\qquad\mbox{for }f,g,h\in C^{\infty}(M).$ Equivalently, a bivector field $\pi$ is Poisson if and only if $[\pi,\pi]=0$, where $[\cdot,\cdot]$ is the Schouten bracket, see e.g. [46]. The characteristic distribution of a Poisson bracket is integrable and foliated by symplectic leaves. ###### Definition A.1. [56] An almost Poisson bracket $\\{\cdot,\cdot\\}$ on $M$ is twisted Poisson if there exists a closed 3-form $\Phi$ on $M$ such that, for each $f,g,h\in C^{\infty}(M)$, $\\{f,\\{g,h\\}\\}+\\{g,\\{h,f\\}\\}+\\{h,\\{f,g\\}\\}=\Phi(X_{f},X_{g},X_{h}),$ where $X_{f},X_{g},X_{h}$ are the hamiltonian vector fields of $f,g,h$ with respect to $\\{\cdot,\cdot\\}$. In other words, a bivector field $\pi$ on $M$ is twisted Poisson if $[\pi,\pi]=\frac{1}{2}\pi^{\sharp}(\Phi)$. ###### Remark A.2. The characteristic distribution of a twisted Poisson bracket is integrable and it is foliated by almost symplectic leaves. Conversely, it was shown in [6] that any regular almost Poisson bracket with integrable characteristic distribution is a twisted Poisson bracket. $\diamond$ A regular almost Poisson bracket $\\{\cdot,\cdot\\}$ on $M$ is determined by a 2-form $\Omega$ and a distribution $F$ defined on $M$ so that $\Omega|_{F}$ is nondegenerate. 
In fact, for $f\in C^{\infty}(M)$, $X_{f}=\\{\cdot,f\\}\quad\mbox{if and only if}\quad{\bf i}_{X_{f}}\Omega|_{F}=df|_{F},$ (A.54) (actually, the bracket is determined by the nondegenerate 2-section $\Omega|_{F}$ on $M$). The distribution $F$ is the characteristic distribution of the bracket. If $F$ is integrable, then $\\{\cdot,\cdot\\}$ is a (regular) twisted Poisson bracket by the 3-form $\Phi=d\Omega$ ($\Omega$ is not necessarily closed). A Poisson bracket has $F$ integrable and $\Omega$ closed. Gauge transformations of a (regular) bracket by a 2-form. ###### Definition A.3. [56] Consider a (regular) bracket $\\{\cdot,\cdot\\}$ on the manifold $M$ as in (A.54) and a 2-form $B$ satisfying that $(\Omega+B)|_{F}$ is nondegenerate. A gauge transformation of $\\{\cdot,\cdot\\}$ by the 2-form $B$ defines a bracket $\\{\cdot,\cdot\\}_{\mbox{\tiny{$B$}}}$ on $M$ given, at each $f\in C^{\infty}(M)$, by ${\bf i}_{X_{f}}(\Omega+B)|_{F}=df|_{F}\quad\mbox{if and only if}\quad X_{f}=\\{\cdot,f\\}_{\mbox{\tiny{$B$}}}.$ In this case, we say that the brackets $\\{\cdot,\cdot\\}$ and $\\{\cdot,\cdot\\}_{\mbox{\tiny{$B$}}}$ are gauge related. ###### Remark A.4. 1. $(i)$ The brackets $\\{\cdot,\cdot\\}$ and $\\{\cdot,\cdot\\}_{\mbox{\tiny{$B$}}}$ have the same characteristic distribution $F$. Therefore, if an almost Poisson bracket has a nonintegrable characteristic distribution, all gauge related brackets will be almost Poisson with a nonintegrable characteristic distribution. 2. $(ii)$ If the bracket $\\{\cdot,\cdot\\}$ is twisted Poisson by a 3-form $\Phi$, then the gauge related bracket $\\{\cdot,\cdot\\}_{\mbox{\tiny{$B$}}}$ is twisted Poisson by the 3-form $(\Phi+dB)$. Moreover, they share the characteristic foliation but the 2-form on each leaf $F_{\mu}$ changes by the term $B_{\mu}=\iota_{\mu}^{*}B$ for $\iota_{\mu}:F_{\mu}\to M$ the inclusion. 3. 
$(iii)$ The original definition of a gauge transformation in [56] was given on Dirac structures, and in that setting the 2-form $B$ does not need to satisfy the nondegeneracy condition on $(\Omega+B)|_{F}$. $\diamond$ ###### Definition A.5. Let $\tau:M\to P$ be a vector bundle and let $\alpha$ be a $k$-form on the manifold $M$. We say that $\alpha$ is semi-basic with respect to the bundle $M\to P$ if ${\bf i}_{X}\alpha=0\quad\mbox{for all $X\in TM$ such that $T\tau(X)=0$}.$ The $k$-form $\alpha$ is basic if there exists a $k$-form $\bar{\alpha}$ on $P$ such that $\tau^{*}\bar{\alpha}=\alpha$. ###### Remark A.6. Consider the canonical symplectic 2-form $\Omega_{\mbox{\tiny{$Q$}}}$ on $T^{*}Q$. If $B$ is a semi-basic 2-form with respect to the bundle $T^{*}Q\to Q$, then $\Omega_{\mbox{\tiny{$Q$}}}+B$ is a nondegenerate 2-form on $T^{*}Q$. $\diamond$ Symmetries. Let us consider an almost Poisson manifold $(M,\\{\cdot,\cdot\\})$ given as in (A.54) and a Lie group $G$ acting on $M$ and leaving $\\{\cdot,\cdot\\}$ invariant. Then on the reduced manifold $M/G$ there is an almost Poisson bracket $\\{\cdot,\cdot\\}_{\mbox{\tiny{red}}}$ defined, at each $f,g\in C^{\infty}(M/G)$, by $\\{f,g\\}_{\mbox{\tiny{red}}}\circ\rho=\\{\rho^{*}f,\rho^{*}g\\},$ where $\rho:M\to M/G$ is the orbit projection. If a $G$-invariant 2-form $B$ satisfies that $(\Omega+B)|_{F}$ is nondegenerate, then the gauge related bracket $\\{\cdot,\cdot\\}_{\mbox{\tiny{$B$}}}$ is $G$-invariant as well. Both brackets $\\{\cdot,\cdot\\}$ and $\\{\cdot,\cdot\\}_{\mbox{\tiny{$B$}}}$ can be reduced to obtain the corresponding reduced brackets $\\{\cdot,\cdot\\}_{\mbox{\tiny{red}}}$ and $\\{\cdot,\cdot\\}_{\mbox{\tiny{red}}}^{\mbox{\tiny{$B$}}}$ on the quotient manifold $M/G$ as the diagram shows: (A.55) As was observed in [36, 6], the brackets $\\{\cdot,\cdot\\}_{\mbox{\tiny{red}}}$ and $\\{\cdot,\cdot\\}_{\mbox{\tiny{red}}}^{\mbox{\tiny{$B$}}}$ can have different properties. 
More precisely, they are not necessarily gauge related, and hence one can be Poisson while the other is not. In fact, $\\{\cdot,\cdot\\}_{\mbox{\tiny{red}}}$ and $\\{\cdot,\cdot\\}_{\mbox{\tiny{red}}}^{\mbox{\tiny{$B$}}}$ are gauge related if and only if the 2-form $B$ is basic with respect to the principal bundle $M\to M/G$. ## Appendix B Appendix: Some facts on reconstruction theory The reconstruction of the dynamics from reduced equilibria and reduced periodic orbits has been well studied in [35, 42] when the symmetry group is compact, and in [2] in the non–compact case. In this appendix we briefly review the basic results of reconstruction theory in the simplest framework, that of free and proper group actions. We consider a Lie group $G$ that acts freely and properly on a manifold $M$. The freeness and properness of the action guarantee that the quotient space $M/G$ has the structure of a manifold and $\tau:M\longrightarrow M/G$ is a principal bundle with structure group $G$. Let $X$ be a $G$–invariant vector field on $M$; then there exists a vector field $\bar{X}$ on $M/G$ which is $\tau$–related to $X$. We recall that a $G$–orbit $\mathcal{O}_{m_{0}}=G\cdot m_{0}$, with $m_{0}\in M$, is a relative equilibrium for $X$ if it is invariant with respect to the flow of $X$ and its projection to the reduced space $M/G$ is an equilibrium of the reduced dynamics $\bar{X}$. Moreover, a $G$–invariant subset $\mathcal{P}$ of $M$ is called a relative periodic orbit for $X$ if it is invariant under the flow and its projection to the quotient manifold $M/G$ is a periodic orbit of $\bar{X}$. Let $\mathcal{P}$ be a relative periodic orbit and $\gamma$ a curve in $\mathcal{P}$. By the periodicity of the reduced dynamics, the integral curves of the complete system that pass through $\gamma(0)$ return periodically, with period $T>0$, to the $G$–orbit through $\gamma(0)$. 
The freeness of the action of $G$ on $M$ guarantees that for every $\gamma$ in $\mathcal{P}$ there exists a unique $p(\hat{\gamma})$ in $G$ such that $\phi^{X}_{T}(\gamma)=\psi_{p(\hat{\gamma})}(\gamma)\,,$ where $\phi^{X}_{T}$ is the flow of $X$ at time $T$, $\psi_{g}$ is the action of $G$ on $M$, $\hat{\gamma}$ is the projection of $\gamma$ on $M/G$ with respect to $\tau$, and the map $p:\mathcal{P}\rightarrow G$, $\gamma\mapsto p=p(\hat{\gamma})$ is the so–called _phase_ [26]. The phase $p$ is a piecewise smooth map, constant along the orbits of $X$ (i.e. $p\circ\phi^{X}_{t}=p$ for all $t$) and equivariant with respect to conjugation, that is, $p(h\cdot\gamma)=h\,p(\hat{\gamma})h^{-1}$ for all $h\in G$ and all $\gamma\in\mathcal{P}$. Then the following result holds. ###### Proposition B.1. [35, 42, 2] Let $\mathcal{P}$ be a relative periodic orbit of $X$ on $M$. Then * i) if the group $G$ is compact, the flow of $X$ in $\mathcal{P}$ is quasi–periodic with at most $\textup{rank}\,G+1$ frequencies; * ii) if $G$ is non–compact, the flow of $X$ in $\mathcal{P}$ is either quasi–periodic or escaping. The non–compact case is the most frequent and also the most interesting; for example, one could say more about which of the two behaviours of the dynamics, namely quasi-periodicity or escaping, is “generic” by studying the group $G$ (but this goes beyond our scope; for more details see [2]). ###### Remark B.2. In [35, 42, 2] reconstruction results are given from the point of view of Lie algebras, while [33] develops a theory in terms of groups. Moreover, [33] investigates the structure of the copies of $\mathbb{R}$ and shows that one can define an intrinsic notion of a certain number of frequencies, which gives rise to the idea that, in this case, the reconstructed dynamics ‘spirals’ toward a certain direction. $\diamond$ ## References * [1] C. 
Agostinelli, _Nuova forma sintetica delle equazioni del moto di un sistema anolonomo ed esistenza di un integrale lineare nelle velocità._ Boll. Un. Mat. Ital., 11 (1956), 1–9. * [2] P. Ashwin and I. Melbourne, Noncompact drift for relative equilibria and relative periodic orbits. Nonlinearity, 10 (1997), 595–616. * [3] P. Balseiro, The Jacobiator of Nonholonomic Systems and the Geometry of Reduced Nonholonomic Brackets. Arch. Ration. Mech. Anal. 214, ( 2014), 453–501. * [4] P. Balseiro, Hamiltonization of Solids of Revolution Through Reduction. J. Nonlinear Sci. 27 (2017), 2001–2035. * [5] P. Balseiro and O.E. Fernandez, Reduction of Nonholonomic Systems in Two Stages. Nonlinearity, Volume 28 (2015) 2873-2912. * [6] P. Balseiro and L. García-Naranjo, Gauge Transformations, Twisted Poisson Brackets and Hamiltonization of Nonholonomic Systems. Arch. Ration. Mech. Anal. 205 (2012), 267–310. * [7] P. Balseiro and N. Sansonetto, A Geometric Characterization of Certain First Integrals for Nonholonomic Systems with Symmetries. SIGMA 12 (2016), 018, 14pages. * [8] P. Balseiro and L.P. Yapu. Conserved quantities and hamiltonization of nonholonomic systems. * [9] L.M. Bates, H. Graumann and C. MacDonnell, Examples of gauge conservation laws in nonholonomic systems. _Rep. Math. Phys._ , 37 (1996), 295–308. * [10] L.M. Bates and J. Śniatycki, Nonholonomic reduction. Rep. Math. Phys. 32 (1993), 99–115. * [11] A.M. Bloch, Nonholonomic Mechanics and Controls. Interdisciplinary Applied Mathematics 24, Systems and Control. (Springer–Verlag, New York, 2003). * [12] A.M. Bloch, P.S. Krishnaprasad, J.E. Marsden, R.M. Murray, Nonholonomic mechanical systems with symmetry. Arch. Rational Mech. Anal. 136 (1996), 21–99. * [13] O. I. Bogoyavlenskij, Extended integrability and bi-Hamiltonian systems. Comm. Math. Phys., 196 (1998), 19–51. * [14] A.V. Bolsinov, A.A. Kilin, and A.O. Kazakov, Monodromy as an obstruction to Hamiltonization: pro or contra? J. Geom. Phys. 87 (2015), 61–75. 
* [15] A. V. Borisov and I. S. Mamaev, Chaplygin’s ball rolling problem is Hamiltonian. Math. Notes, 70 (2001), 793–795. * [16] A.V. Borisov, I.S. Mamaev and A.A. Kilin, Rolling of a ball on a surface. New integrals and hierarchy of dynamics. Regul. Chaotic Dyn., 7 (2002), 177–200. * [17] F. Cantrijn, M. de Leon, M. de Diego and J. Marrero, _Reduction of nonholonomic mechanical systems with symmetries_ , Rep. Math. Phys. 42 (1998), 25–45. * [18] F. Cantrijn, J. Cortés, M. de Leon and M. de Diego, On the geometry of generalized Chaplygin systems, Math. Proc. Cambridge Phil. Soc. 132 (2002), 323–351. * [19] S. A. Chaplygin, On a ball’s rolling on a horizontal plane. Regul. Chaotic Dyn., 7 (2002), 131–148; original paper in Mathematical Collection of the Moscow Mathematical Society, 24 (1903), 139–168. * [20] J. Cortés Monforte, Geometric, control and numerical aspects of nonholonomic systems. Lecture Notes in Mathematics 1793 (Springer-Verlag, Berlin, 2002). * [21] R.Cushman, Routh’s sphere. Rep. on Math. Phys. 42 (1998), 42–70. * [22] R. Cushman, D. Kemppeinen, J. Śniatycki and L. M. Bates, Geometry of nonholonomic constraints. Rep. Math. Phys., 36 (1995), 275–286. * [23] R. Cushman, J.J. Duistermaat and J. Śniatycki, Geometry of Nonholonomically Constrained Systems. Advanced Series in Nonlinear Dynamics 26, Singapore: World Scientific, 2010. * [24] J.J. Duistermaat, Chaplygin’s sphere. arXiv:math/0409019 (2004). * [25] F. Fassò and A. Giacobbe, Geometric structure of ”broadly integrable” Hamiltonian systems. J. Geom. Phys. 44 (2002), 156–170. * [26] F. Fassò and A. Giacobbe, Geometry of Invariant Tori of Certain Integrable Systems with Symmetry and an Application to a Nonholonomic System. SIGMA 3 (2008), 051. * [27] F. Fassò, A. Giacobbe and N. Sansonetto, Periodic flows, rank–two Poisson structures, and Nonholonomic systems. Reg. Ch. Dyn. 10 (2005), 267–284. * [28] F. Fassò, A. Giacobbe and N. 
Sansonetto, _Gauge conservation laws and the momentum equation in nonholonomic mechanics._ Rep. Math. Phys. 62 No. 3 (2008), 345–367. * [29] F. Fassò, A. Giacobbe, N. Sansonetto, _On the number of weakly Noetherian constants of motion of nonholonomic systems._ J. Geom. Mech. 1 (2009) 389–416. * [30] F. Fassò and N. Sansonetto, An Elemental Overview of the Nonholonomic Noether Theorem. Int. J. Geom. Methods Mod. Phys. 6 (2010), 1343–1355. * [31] F. Fassò, A. Giacobbe, N. Sansonetto, Linear weakly Noetherian constants of motion are horizontal gauge momenta. J. Geom. Mech. 4 (2012) 129–136. * [32] F. Fassò, A. Ramos, N. Sansonetto, The reaction-annihilator distribution and the nonholonomic Noether theorem for lifted actions. Reg. Ch. Dyn. 12 (2007), 579–588. * [33] F. Fassò. S. Passarella, and M. Zoppello, Control of locomotion systems and dynamics in relative periodic orbits. To appear in J. Geom. Mech. doi:10.3934/jgm.2020022. * [34] Y. N. Fedorov, Systems with an invariant measure on Lie groups, In _Hamiltonian systems with three or more degrees of freedom (S’Agarò, 1995)_ , 350–356, NATO Adv. Sci. Inst. Ser. C Math. Phys. Sci., 533 Kluwer, Dordrecht, 1999. * [35] M.J. Field, Equivariant dynamical systems. Trans. Am. Math. Soc. 259 (1990), 185–205. * [36] L. Garcia-Naranjo _Reduction of almost Poisson brackets and hamiltonization of the Chaplygin sphere._ Disc. and Cont. Dyn. Syst. Series S, 3, (2010), 37–60. * [37] L.C. García-Naranjo and J. Montaldi _Gauge momenta as Casimir functions of nonholonomic systems._ Arch Rational Mech Anal (2018), 228 (2), pp 563-602 * [38] J. Hermans, A symmetric sphere rolling on a surface. Nonlinearity, 8 (1995), 493–515. * [39] A. Ibort, M. de León, J. C. Marrero, D. Martín de Diego Dirac brackets in constrained dynamics. Fortschritte der Physik, Vol.47 (1999), 459–492. * [40] Il. Iliev and Khr. 
Semerdzhiev, _Relations between the first integrals of a nonholonomic mechanical system and of the corresponding system freed of constraints._ J. Appl. Math. Mech. 36 (1972), 381–388. * [41] Il. Iliev, _On first integrals of a nonholonomic mechanical system._ J. Appl. Math. Mech. 39 (1975), 147–150. * [42] M. Krupa, Bifurcations of relative equilibria. SIAM J. Math. Anal. 21 (1990), 1453–1486. * [43] C.-M. Marle, Reduction of constrained mechanical systems and stability of relative equilibria. Comm. Math. Phys. 174 (1995), 295–318. * [44] C.-M. Marle, Various approaches to conservative and nonconservative nonholonomic systems. Rep. Math. Phys. 42 (1998), 211–229. * [45] C.-M. Marle, _On symmetries and constants of motion in Hamiltonian systems with nonholonomic constraints._ In “Classical and quantum integrability” (Warsaw, 2001), Banach Center Publ. 59, Polish Acad. Sci. Warsaw (2003), 223–242. * [46] J.E. Marsden and T.S. Ratiu, Introduction to Mechanics and Symmetry, 2nd ed. Texts in Appl. Math. 17, Springer, New York, 1999. * [47] J. Milnor, _Curvatures of Left Invariant Metrics on Lie Groups._ Advances in Mathematics 21 (1976), 293–329. * [48] G.L. Naber, Topology, Geometry and Gauge Fields. Applied Mathematical Sciences, 141. Springer, Heidelberg, 1991. * [49] Ch. Nash, Differential Topology and Quantum Field Theory. Academic Press, New York, 1991. * [50] Ju.I. Neimark and N.A. Fufaev, Dynamics of Nonholonomic Systems. Translations of Mathematical Monographs 33 (AMS, Providence, 1972). * [51] J. Ostrowski, A. Lewis, R. Murray and J. Burdick, Nonholonomic mechanics and locomotion: the snakeboard example. Proceedings of the 1994 IEEE International Conference on Robotics and Automation. * [52] E.J. Routh, Treatise on the Dynamics of a System of Rigid Bodies (Advanced Part). Dover, New York, 1955. * [53] G. Rudolph and M. Schmidt, Differential Geometry and Mathematical Physics. Part 2. Theoretical and Mathematical Physics Series, 2017. * [54] N. 
Sansonetto, First integrals in nonholonomic systems. Ph.D. thesis, Università degli Studi di Padova. * [55] D. Sepe and S. Vu Ngoc, Integrable systems, symmetries, and quantization. Lett. Math. Phys. 108 (2018), 499–571. * [56] P. Ševera, A. Weinstein, Poisson Geometry with a 3-form Background. Progress of Theoretical Physics, Vol. 144 (2001), 145-154. * [57] J. Sniatycki, Nonholonomic Noether theorem and reduction of symmetries. Rep. Math. Phys. 42 (1998), 5-23. * [58] A. Van der Shaft and B.M. Mashke, On the Hamiltonian formulation of nonholonomic mechanical systems. Rep. Math. Phys. 34 (1994), 225-233. * [59] D.V. Zenkov, The geometry of the Routh problem. J. Nonlinear Sci. 5 (1995), 503-519. * [60] D.V. Zenkov, _Linear conservation laws of nonholonomic systems with symmetry._ In “Dynamical systems and differential equations” (Wilmington, NC, 2002), Discrete Contin. Dyn. Syst. suppl. (2003), 967–976. * [61] N.T. Zung, Geometry of Integrable non–Hamiltonian Systems, Geometry and Dynamics of Integrable Systems. Advanced Courses in Mathematics, CRM Barcelona. Editors V. Matveev and E. Miranda, Birkhäuser (2016), 85-135.
# Dynamics and Rheology of Ring-Linear Blend Semidilute Solutions in Extensional Flow: Single Molecule Experiments Yuecheng Zhou Current address: Department of Chemistry, Stanford University, Stanford, California 94305, USA Department of Materials Science and Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801, USA Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801, USA Charles D. Young Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801, USA Department of Chemical and Biomolecular Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801, USA Kathryn E. Regan Department of Physics, University of San Diego, San Diego, California 92110, USA Megan Lee Department of Physics, University of San Diego, San Diego, California 92110, USA Sourya Banik Department of Chemical Engineering, Texas Tech University, Lubbock, Texas 79409, USA Dejie Kong Department of Chemical Engineering, Texas Tech University, Lubbock, Texas 79409, USA Gregory B. McKenna Department of Chemical Engineering, Texas Tech University, Lubbock, Texas 79409, USA Rae M. Robertson-Anderson Department of Physics, University of San Diego, San Diego, California 92110, USA Charles E. Sing Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801, USA Department of Chemical and Biomolecular Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801, USA Charles M. 
Schroeder To whom correspondence should be addressed: [email protected] Department of Materials Science and Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801, USA Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801, USA Department of Chemical and Biomolecular Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801, USA ###### Abstract Ring polymers exhibit unique flow properties due to their closed chain topology. Despite recent progress, we have not yet achieved a full understanding of the nonequilibrium flow behavior of rings in nondilute solutions where intermolecular interactions greatly influence chain dynamics. In this work, we directly observe the dynamics of DNA rings in semidilute ring-linear polymer blends using single molecule techniques. We systematically investigate ring polymer relaxation dynamics from high extension and transient and steady-state stretching dynamics in planar extensional flow for a series of ring-linear blends with varying ring fraction. Our results show multiple molecular sub-populations for ring relaxation in ring-linear blends, as well as large conformational fluctuations for rings in steady extensional flow, even long after the initial transient stretching process has subsided. We further quantify the magnitude and characteristic timescales of ring conformational fluctuations as a function of blend composition. Interestingly, we find that the magnitude of ring conformational fluctuations follows a non-monotonic response with increasing ring fraction, first increasing at low ring fraction and then substantially decreasing at large ring fraction in ring-linear blends.
A unique set of ring polymer conformations are observed during the transient stretching process, which highlights the prevalence of molecular individualism and supports the notion of complex intermolecular interactions in ring-linear polymer blends. In particular, our results suggest that transient intermolecular structures form in ring-linear blends due to a combination of direct forces due to linear chains threading through open rings and indirect forces due to hydrodynamic interactions; these combined effects lead to large conformational fluctuations of rings over distributed timescales. Taken together, our results provide a new molecular understanding of ring polymer dynamics in ring-linear blends in nonequilibrium flow. ## I Introduction Ring polymers have a topologically closed structure with no beginning or end. Due to their unique properties, ring polymers have captured the attention of rheologists and soft materials scientists for decades [1]. Beyond their intriguing macromolecular structures, ring polymers are of practical importance in several disciplines. In nature, mitochondrial DNA and plasmid DNA generally occur in cyclic forms [2]. Genome organization in cell nuclei has been modeled as a melt of nonconcatenated ring polymers, which represents the simplest model where reptation is suppressed due to topological constraints [3]. Prior work has examined the molecular threading of linear chains through macrocyclic oligomeric rings [4], and recent advances in synthetic organic chemistry have enabled the synthesis of cyclic rings using olefin metathesis [5]. In addition, synthetic ring polymers have been used to generate transient materials with triggered degradation properties [6], thereby providing promising new routes towards the development of fully recyclable synthetic materials [7, 8, 9]. The flow properties of ring polymer solutions and melts have long been a topic of interest in the community. 
Early efforts to understand the flow behavior of ring polymer melts focused on synthetic polystyrene and polybutadiene rings using shear rheology [10, 11, 12, 13]. In general, ring polymer melts exhibit a smaller zero-shear viscosity, $\eta_{0}$, and a larger recoverable compliance, $J_{e}^{0}$, in the terminal flow regime compared to linear melt counterparts [13]. Ring polymer melts also show a surprising dependence of zero-shear viscosity on molecular weight, $M_{w}$ [13]. In particular, $\eta_{0}$ for ring polymer melts is approximately one-half the value of linear polymer melt counterparts below the critical entanglement molecular weight, $M_{e}$. For higher molecular weight rings, $\eta_{0}$ generally shows a smaller power-law scaling exponent compared to the commonly observed $\eta_{0}\propto M_{w}^{3.4}$ scaling for linear polymers [11, 14, 15], albeit for ring melts whose nominal entanglement density, relative to linear polymers, is fewer than 15 entanglements per chain. Moreover, ring polymer melts exhibit no rubbery plateau and show a faster terminal relaxation that significantly contrasts with linear polymer melts undergoing stress relaxation [16, 17, 18, 19]. However, the rheological response of ring polymers is highly susceptible to linear chain contamination [20, 12]. For example, it was reported that even a small amount of linear chains (as little as 0.07$\%$ by volume) drastically increases the zero-shear viscosity of ring polymer melts and causes the rubbery plateau to reappear [16]. However, it has been challenging to precisely quantify trace amounts of linear polymers in ring polymer melts, with subsequent experiments showing some differences in rheological response with quantitatively different linear chain content [17]. A major challenge in experimental characterization of ring polymers lies in preparing high purity ring samples that are essentially free of linear chains.
To this end, advances in chromatography techniques (liquid chromatography at the critical condition, LCCC) have led to improved separation of rings from linear polymers [21], thereby enabling experimental studies of the linear viscoelasticity [16, 22], non-linear shear rheology [22], and extensional flow properties of LCCC-purified ring melts [23] and ring-linear blends [24]. Nevertheless, prior work has shown that ring polymer samples obtained by post-polymerization cyclization of linear chains followed by chromatographic purification invariably contain small amounts of linear chains that affect rheological measurements despite rigorous purification using the LCCC method [25, 26]. Such observations strongly motivate the need to understand the flow behavior of ring-linear polymer blends [24]. The equilibrium properties of ring-linear blends have been studied for concentrated polymer solutions and melts [27, 28, 29]. Molecular architecture and blend composition both play major roles in determining the conformation and size of ring polymers in these systems. In ring-linear blend melts, increasing the fraction of linear chains leads to an increase in the radius of gyration, $R_{g}$, for rings and a drastic decrease in the ring polymer diffusion coefficient in the blend [29, 28]. Interestingly, the excluded volume exponent, $\nu$, for ring polymers remains relatively constant in good solvent conditions for ring polymer solutions in the presence of linear polymers up to 5-10$\%$ linear chains in solution [30]. Experimental and computational studies have further shown that molecular topology significantly alters the diffusion coefficient of ring polymers. Single molecule experiments on ring DNA show that rings diffuse $\approx$1.3$\times$ faster than linear polymers in concentrated ring DNA solutions [31]. Remarkably, rings diffuse $\approx$10$\times$ slower than linear polymers when the background matrix contains concentrated linear DNA molecules [32, 33].
A marked difference in the molecular weight scaling dependence of polymer diffusion was observed for rings diffusing in concentrated solutions of rings versus linear polymers, implying different underlying mechanisms of chain diffusion in ring versus linear backgrounds [32]. Ring polymers were also reported to exhibit heterogeneous multimodal diffusion when the concentration of the background linear polymer solution increases well into the entangled regime [34]. In ring-linear blends, the center-of-mass diffusion behavior of ring polymers shows rich dynamics due to mixed chain topologies. For example, the diffusion coefficient of rings was found to decrease monotonically upon increasing the fraction of linear polymers from 0 to 50$\%$ in a concentrated ring-linear polymer blend [27]. This phenomenon emerges when the background blend concentration approaches the critical entanglement concentration for linear chains, $c_{e}$, and becomes enhanced when the blend is entangled at concentrations above $c_{e}$. In order to elucidate the key physical features of ring-linear topological constraints, several theoretical models have been proposed to explain the slow-down of ring polymer dynamics in entangled solutions of ring-linear polymer blends. Constraint release (CR) or restricted reptation (RR) of background linear polymers, where rings relax through amoebae-like motion in fixed obstacles formed by background linear chains, was used to model the stress relaxation of rings in melts of ring-linear blends [35, 36]. The once-threaded model (R1) was subsequently developed [37], wherein rings diffuse along a threaded linear chain. Here, ‘threading’ refers to one or more linear chains in the background matrix penetrating into an open ring conformation, resulting in a significant decrease in ring polymer diffusion [38]. Despite these conceptual advances, the actual diffusion mechanism is not fully understood and may be composed of elements from several of these models [39].
The majority of prior work on ring-linear polymer blends has focused on the equilibrium properties such as ring polymer size or ring diffusion in concentrated solutions or melts [32, 31, 33, 34, 29, 27, 19, 40]. However, the flow behavior of ring-linear blends is of paramount importance for processing applications. In 2019, nonlinear extensional rheology was performed on LCCC-purified ring polystyrene melts, with results showing unexpected increases in extensional viscosity at low stretch rates [23] due to topological linking of rings [41]. In 2020, nonlinear extensional rheology combined with molecular dynamics (MD) simulations and ex situ small angle neutron scattering was used to probe the threading-unthreading behavior of ring and linear polymers in melts of ring-linear blends [24]. Whereas shear and extensional rheology and small angle neutron scattering (SANS) characterization provide useful insight into ring-linear polymer blends, such bulk-level experiments only provide information on ensemble-averaged properties, which tends to obscure dynamics at the molecular level. Recent advances in single molecule fluorescence microscopy (SMFM) and automated flow control enable the direct observation of polymer dynamics under nonequilibrium flow conditions [42]. SMFM can be used to identify and characterize molecular sub-populations of polymer chains that adopt different transient conformations or show molecular individualism in flow [43, 44, 45]. In recent years, SMFM has been used to study the dynamics of linear polymers in dilute solution large amplitude oscillatory extensional flow (LAOE) [46, 47] and in semidilute unentangled solutions in extensional flow [48, 49, 50]. Single polymer dynamics was also used to study the relaxation of linear polymers in entangled solutions [51], revealing unexpectedly heterogeneous dynamics.
The flow properties of ring polymers in dilute solutions were recently studied using single molecule experiments and simulations [52, 53, 54, 55]. In dilute solution extensional flow, ring polymers show reduced molecular individualism during transient stretching and a shifted coil-stretch transition (CST) compared to linear chains due to combined effects of a closed, constrained molecular topology and intramolecular hydrodynamic interactions between the two ring strands [52, 53]. The dynamics of single ring polymers in the flow-gradient plane of shear was recently studied using a combination of SMFM with a custom shear flow apparatus and Brownian dynamics simulations [56], where it was observed that the probability of chain extension in the flow direction was qualitatively different for rings compared to linear chains in shear flow [56]. Single molecule studies of rings have also been extended to semidilute polymer solutions. Recently, Zhou _et al._ [57] studied the extensional flow dynamics of ring polymers in semidilute solutions of linear polymers near the overlap concentration $c^{*}$, which is defined as the concentration at which linear polymer molecules begin to overlap and interpenetrate at equilibrium and therefore defines the transition between the dilute and semidilute unentangled regimes [35]. In steady extensional flow, rings exhibit large conformational fluctuations in semidilute solutions, which were attributed to the transient threading of linear polymers through open rings stretching in flow [57]. Remarkably, such large conformational fluctuations of rings emerged at extremely low concentrations of background linear polymers (0.025 $c^{*}$). Overall, these studies showcase the ability of single molecule techniques to reveal the effects of molecular architecture on the flow dynamics of ring polymers. In this work, we study the relaxation and transient stretching dynamics of ring DNA molecules in semidilute ring-linear polymer blends using SMFM (Fig.
1). Fluorescently labeled ring DNA molecules (45 kbp) are introduced into ring-linear DNA blend solutions of equivalent molecular weight. In this way, we study the flow dynamics of three different ring-linear blends containing 17$\%$ rings (R) and 83$\%$ linear (L) polymers by mass (17$\%$ R-83$\%$ L), 50$\%$ R-50$\%$ L, and 83$\%$ R-17$\%$ L. Results are compared to ring dynamics in semidilute background solutions of purely linear polymers (0$\%$ R-100$\%$ L). Experimental results are complemented by Brownian dynamics (BD) simulations of ring-linear blend solutions. Essential points of comparison are included in this manuscript, and a detailed simulation study for ring dynamics is presented in a companion paper [58]. Our results show that the magnitude of ring conformational fluctuations exhibits a non-monotonic response with increasing ring fraction in blends, first increasing at low ring fraction and then substantially decreasing for large ring fractions in the blend ($>80\%$ R). Conformational fluctuations are quantified in terms of an average fluctuation magnitude $\langle\delta\rangle$ and a characteristic fluctuation timescale using autocorrelation analysis. Interestingly, we identify a unique set of molecular conformations during the transient stretching process for ring polymers, suggesting complex intermolecular interactions between rings and polymer chains in the ring-linear blend solution. We further determine average fractional extension in semidilute ring-linear blends; in all cases, rings show an overall decreased fractional extension for ring-linear blends in extensional flow compared to dilute solution rings or pure linear chains in semidilute solutions. Taken together, experimental and computational results show that transient ring conformations in flow are driven by ring-linear threading interactions and long-range intermolecular hydrodynamic interactions (HI) in semidilute solutions. Figure 1: Schematic of ring-linear DNA polymer blends. 
Fluorescently labeled tracer ring DNA molecules (45 kbp, red) are uniformly dissolved in a semidilute blend solution of rings (blue) and linear chains (gray). Ring polymer dynamics are investigated at (a) equilibrium or zero-flow conditions and in (b) planar extensional flow. The transient molecular stretch of ring polymers $l_{\mathrm{{circ}}}$ is directly measured using SMFM. ## II Materials and methods A. Preparation of 45 kbp ring and linear DNA Double-stranded 45 kbp ring and linear DNA molecules are prepared via replication of fosmids in Escherichia coli, followed by extraction and purification, as previously described [59, 60, 31]. Briefly, circular DNA molecules are extracted from bacterial cell cultures using alkaline lysis, followed by renaturation of the cloned DNA by treatment with an acidic detergent solution. Genomic DNA and cellular debris precipitate are removed by centrifugation, and supercoiled DNA molecules are converted to relaxed circular conformations via treatment with Topoisomerase-I (New England Biolabs) [60, 61]. To generate linear DNA, restriction endonucleases are used to specifically cut the double stranded DNA ring backbone at precisely one location. Ring and linear polymer samples are treated with RNase A to remove contaminating RNA, and excess protein is removed by phenol-chloroform extraction followed by dialysis. Finally, DNA samples are concentrated by a second isopropanol precipitation, and the molecular topology and concentration are determined using gel electrophoresis [59, 31]. In general, the concentration of the prepared (stock) ring and linear DNA solutions is $\approx$500 $\mu$g/mL. The purity of ring DNA samples is further characterized using single molecule visualization. Here, small amounts of DNA samples are taken from each batch and fluorescently labeled, as described below. 
Fluorescently labeled samples are then diluted to a concentration of approximately 5$\times 10^{-4}$ ng/$\mu$L in the imaging buffer and introduced into a cross-slot microfluidic device for imaging. In this way, single DNA molecules are stretched in extensional flow, which allows for direct observation and classification of ring or linear topology. This process is repeated for an ensemble of at least 200 molecules for reliable statistics, enabling quantification of the ring-linear fraction in each sample. B. Preparation of semidilute ring-linear blends For all experiments, the total polymer concentration in ring-linear DNA blends was maintained at 50 $\mu$g/mL, which corresponds to $\approx$1 $c^{*}$ for 45 kbp linear DNA [51]. In order to prepare ring-linear DNA blends with varying ring fractions, we first calculate the mass of ring and linear DNA required at different target ring and linear DNA compositions with a desired total volume of 5 mL, which is a typical sample volume used for single molecule imaging and viscosity measurements. Next, based on the stock 45 kbp ring DNA concentration and its ring-linear content, a working volume of 45 kbp ring DNA solution is prepared and heated to 65 °C for 10 minutes, followed by snap cooling to 0 °C. A working volume of 45 kbp linear DNA is also prepared following a similar procedure. Both working volumes are then concentrated to $\approx$50 $\mu$L using a MiVac Quattro concentrator (Genevac, UK). Next, the concentrated working volumes of 45 kbp ring DNA and linear DNA are mixed and diluted with viscous buffer solution containing 30 mM Tris/Tris-HCl (pH 8.0), 2 mM EDTA, 5 mM NaCl and sucrose (66.3 $\%$ w/w) to a final sample volume of 5 mL. Prepared semidilute ring-linear blends then undergo a gentle rotational mixing procedure for approximately 4 hours at 22.5 °C to ensure sample homogeneity, followed by overnight gentle rotational mixing at 4 °C.
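The required ring and linear DNA masses follow directly from the fixed total concentration and sample volume. A minimal sketch of this bookkeeping (variable names are illustrative, not from the authors' protocol; the values reproduce the compositions studied here):

```python
# A fixed total DNA concentration of 50 ug/mL in a 5 mL sample gives
# 250 ug of total DNA, split between ring and linear DNA according to
# the target blend composition.
total_conc_ug_per_ml = 50.0   # total DNA concentration (ug/mL)
volume_ml = 5.0               # sample volume (mL)
total_mass_ug = total_conc_ug_per_ml * volume_ml  # 250 ug total

for ring_pct in (0, 17, 50, 83):
    ring_mass_ug = total_mass_ug * ring_pct / 100.0
    linear_mass_ug = total_mass_ug - ring_mass_ug
    print(f"{ring_pct:3d}% R: {ring_mass_ug:6.1f} ug ring, "
          f"{linear_mass_ug:6.1f} ug linear")
```

For the 17$\%$ R blend, for instance, this yields 42.5 $\mu$g ring and 207.5 $\mu$g linear DNA, matching Table 1.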
Ring-linear DNA blends with varying ring fraction and their corresponding properties are shown in Table 1.

Table 1: Ring-linear DNA blends studied in this work. The total DNA concentration in all blends was maintained at 50 $\mu$g/mL.

| Ring DNA ($\%$) | Ring DNA mass ($\mu$g) | Linear DNA ($\%$) | Linear DNA mass ($\mu$g) | Volume (mL) |
|---|---|---|---|---|
| 0 | 0 | 100 | 250 | 5 |
| 17 | 42.5 | 83 | 207.5 | 5 |
| 50 | 125 | 50 | 125 | 5 |
| 83 | 207.5 | 17 | 42.5 | 5 |

C. Fluorescent labeling of ring DNA For single molecule imaging, DNA is fluorescently labeled with an intercalating dye (YOYO-1, Molecular Probes, Thermo Fisher) at a dye-to-base pair ratio of 1:4 for $>$1 hour in the dark at room temperature. Trace amounts of fluorescently labeled ring DNA are then added to background solutions of unlabeled semidilute ring-linear DNA blends, resulting in a final labeled DNA concentration of $2\times 10^{-3}$ $\mu$g/mL. In addition, a small amount of the reducing agent $\beta$-mercaptoethanol (14 $\mu$M) and a coupled enzymatic oxygen scavenging system containing glucose (50 $\mu$g/mL), glucose oxidase (0.01 $\mu$g/mL), and catalase (0.004 $\mu$g/mL) are added into the ring-linear DNA blends to suppress photobleaching and photocleaving of fluorescently labeled DNA molecules. The semidilute ring-linear blend mixture is rotationally mixed for $>$20 minutes before single molecule imaging. Solution viscosity $\eta_{\mathrm{s}}$ is determined using a cone and plate viscometer (Brookfield, USA) at 22.5 °C. D. Optics and imaging Single molecule fluorescence microscopy is performed using an inverted epifluorescence microscope (IX71, Olympus) coupled to an electron-multiplying charge coupled device (EMCCD) camera (iXon, Andor Technology). Fluorescently labeled DNA samples are illuminated using a 50 mW 488 nm laser (Spectra-Physics, CA, USA) directed through a neutral density (N.D.) filter (ThorLabs, NJ, USA) and a 488 nm single-edge dichroic mirror (ZT488rdc, Chroma).
Fluorescence emission is collected by a 1.45 NA, 100$\times$ oil immersion objective lens (UPlanSApo, Olympus) followed by a 525 nm single-band bandpass filter (FF03-525/50-25, Semrock) and a 1.6$\times$ magnification lens, yielding a total magnification of 160$\times$. Images are acquired by an Andor iXon EMCCD camera (512$\times$512 pixels, 16 $\mu$m pixel size) under frame transfer mode at a frame rate of 33 Hz (0.030 s per frame). Images obtained using fluorescence microscopy are analyzed using a custom Matlab code to quantify the polymer conformations, as previously described [57]. The full contour length of fluorescently labeled 45 kbp ring DNA is approximately 20 $\mu$m, such that the stretched contour length of the 45 kbp ring polymer is $L_{\mathrm{circ}}$ = 10 $\mu$m [57], which is equal to one-half of the fully stretched contour length of the equivalent linear polymer of identical molecular weight, $L_{\mathrm{lin}}$. E. Microfluidics and flow field characterization Two-layer PDMS microfluidic devices are fabricated using standard techniques in soft lithography, as previously described [47]. In brief, the microfluidic device contains a fluidic layer situated below a control layer containing a fluidic valve. The fluidic layer is fabricated to contain a cross-slot channel geometry 300 $\mu$m in width and 100 $\mu$m in height. In this way, planar extensional flow is generated in the fluidic layer, and the control layer contains a pressure-driven valve to control fluid flow. Flow field characterization and strain rate determination are performed in ring-linear blends prior to single polymer dynamics experiments using particle tracking velocimetry (PTV), as previously described [46]. F. Modeling and simulation of semidilute ring-linear blends Ring-linear polymer solution blends are modeled by coarse-grained bead-spring chains with approximately 1 Kuhn step per spring.
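As a concrete illustration of the bead-spring Brownian dynamics approach, the following minimal Python sketch advances a single ring of beads by one explicit Euler step in planar extensional flow. It is a deliberately simplified, free-draining version of the method described below: the identity replaces the Rotne-Prager-Yamakawa diffusion tensor, and a harmonic spring replaces the Kremer-Grest potential; all names and parameter values are illustrative.

```python
import numpy as np

def bd_step(r, dt, k_spring, eps_dot, kT=1.0, zeta=1.0, rng=None):
    """One explicit-Euler Brownian dynamics step for a single bead-spring
    ring in planar extensional flow (du_x/dx = eps_dot = -du_y/dy).
    Free-draining sketch: identity diffusion tensor, harmonic springs."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(r)
    # Spring forces: each bead is bonded to its two neighbours on the ring.
    f = np.zeros_like(r)
    for i in range(n):
        for j in ((i - 1) % n, (i + 1) % n):
            f[i] -= k_spring * (r[i] - r[j])
    # Solvent velocity gradient tensor for planar extension (kappa . r term).
    kappa = np.array([[eps_dot, 0.0, 0.0],
                      [0.0, -eps_dot, 0.0],
                      [0.0, 0.0, 0.0]])
    drift = r @ kappa.T + f / zeta
    noise = np.sqrt(2.0 * kT * dt / zeta) * rng.standard_normal(r.shape)
    return r + drift * dt + noise

# Deterministic check: with no flow and no noise the ring relaxes toward its
# centre of mass, which is conserved because spring forces cancel pairwise.
rng = np.random.default_rng(1)
r0 = rng.standard_normal((8, 3))
r1 = bd_step(r0, dt=0.01, k_spring=1.0, eps_dot=0.0, kT=0.0)
```

Iterating `bd_step` over many steps with `eps_dot > 0` produces transient stretching trajectories analogous to the step strain experiments; the study itself uses the hydrodynamically coupled formulation of Eq. (1).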
We perform BD simulations to evolve bead positions in an implicit solvent via the Langevin equation of motion: $\frac{d\tilde{\bm{r}}_{i}}{d\tilde{t}}=\tilde{\bm{\kappa}}\cdot\tilde{\bm{r}}_{i}-\sum_{j}\tilde{\textbf{D}}_{ij}\nabla_{\tilde{\bm{r}}_{j}}(\tilde{U})+\tilde{\bm{\xi}}_{i}$ (1) where tildes denote dimensionless quantities. Polymer beads experience flow via the $3N\times 3N$ block diagonal tensor $\tilde{\bm{\kappa}}$, which has $3\times 3$ diagonal blocks given by the solvent velocity gradient tensor $(\nabla\tilde{\textbf{v}})^{T}$. Conservative interactions $\tilde{U}$ are given by the Kremer-Grest potential, which accounts for finite extensibility and prevents spring crossings [62]. Solvent-mediated HI and Stokes drag are included via the diffusion tensor $\tilde{\textbf{D}}_{ij}$, for which we use the Rotne-Prager-Yamakawa tensor [63, 64]. The Brownian noise $\tilde{\bm{\xi}}_{i}$ is approximated by the truncated expansion ansatz [65]. Evaluation of the diffusion tensor and Brownian noise is accelerated by the iterative conformational averaging method [66, 67, 68]. We simulate ring and linear polymers with an equal number of beads per chain $N_{R}=N_{L}=150$. Generally we consider $N_{C}=128$ chains per simulation, with the number of rings and linear chains chosen to match the blend ratios in experiments. Further details and verification of the method are available in previous work [67, 68], and generalization to ring-linear polymer blends is described in a companion article [58]. ## III Results and Discussion A. Longest relaxation time of ring polymers in ring-linear blends Figure 2: Relaxation of ring polymers in semidilute ring-linear polymer blends. 
Single molecule relaxation trajectories (grey) and ensemble-averaged relaxation trajectories (black) for molecular sub-populations corresponding to (a) single-mode and (b) double-mode exponential relaxation trajectories for ring polymers in a semidilute background solution of 17$\%$ ring - 83$\%$ linear blend at 50 $\mu$g/mL. Error bars are determined from the standard deviation of molecular trajectories. (c) Fraction of single-mode and double-mode exponential relaxation behavior as a function of ring polymer fraction in blends. Uncertainties in classifying relaxation behavior are also plotted for cases where single molecule trajectories are not well described by single-mode or double-mode behavior. (d) Longest relaxation times normalized by dilute solution values $\tau_{z}^{circ}$ for ring polymers and $\tau_{z}^{lin}$ for linear polymers from experiments (black diamonds) and BD simulations (red hexagons) in semidilute ring-linear polymer blends as a function of ring polymer fraction. The longest relaxation time for linear polymers in semidilute unentangled solution is also shown as a reference (blue circle) [48]. Experimental molecular ensembles consist of $n\geq 50$ single molecules for each blend. We began by determining the longest relaxation time of ring polymers in semidilute ring-linear blends with varying ring fraction (Fig. 2). In all cases, the total polymer concentration was maintained at 50 $\mu$g/mL, which corresponds to the overlap concentration, $c^{*}$, of linear 45 kbp DNA. Ring-linear blends are subjected to a step strain in planar extensional flow at a strain rate $\dot{\epsilon}$ above the coil-stretch transition. The accumulated fluid strain or Hencky strain, $\epsilon$, is calculated as $\epsilon=\int_{0}^{t_{d}}\dot{\epsilon}(t^{\prime})dt^{\prime}$, where $t_{d}$ is the duration of the step strain rate input, $\dot{\epsilon}(t^{\prime})=\dot{\epsilon}H(t^{\prime})$, and $H$ is the Heaviside function.
Following the step strain rate input (at times $t>t_{d}$), the flow is stopped and the polymer solution relaxes back to equilibrium. In this way, single polymer relaxation trajectories are obtained as part of the step strain-relaxation experiments, such that fluorescently labeled ring DNA molecules experience at least $\epsilon$ = 20 units of fluid strain prior to cessation of extensional flow. During the flow portion of this step, fluorescently labeled rings are stretched to $\approx$0.6$L_{\mathrm{circ}}$ prior to flow cessation. Longest polymer relaxation times are then determined by fitting the terminal 30$\%$ of the average squared fractional extension $(l_{\mathrm{circ}}/L_{\mathrm{circ}})^{2}$ to a single-mode or double-mode exponential decay function, as previously described [51, 42, 57] and elaborated on below. Here $l_{\mathrm{circ}}$ denotes the experimentally measured span of polymer extension in the two-dimensional flow plane. Our results reveal two distinct molecular sub-populations for ring polymer relaxation in semidilute ring-linear blend solutions. One molecular sub-population relaxes via a single exponential decay, whereas the second molecular sub-population relaxes via a double exponential decay response. For instance, ring polymer relaxation in a 17$\%$ R-83$\%$ L blend shows that approximately 60$\%$ of the molecular relaxation trajectories are well described by a single-mode exponential decay (Fig. 2a), whereas approximately 40$\%$ of the relaxation trajectories are found to exhibit double-mode exponential decay (Fig. 2b). Here, the single-mode relaxation time $\tau_{s}$ is determined from $(l_{\mathrm{circ}}/L_{\mathrm{circ}})^{2}=A\exp(-t/\tau_{s})+B$, where $A$ and $B$ are numerical constants. The fast and slow double-mode relaxation times $\tau_{d,1}$ and $\tau_{d,2}$ are determined from $(l_{\mathrm{circ}}/L_{\mathrm{circ}})^{2}=A_{1}\exp(-t/\tau_{d,1})+A_{2}\exp(-t/\tau_{d,2})+B$, where $A_{1}$, $A_{2}$, and $B$ are numerical constants.
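A single-mode fit of this kind can be sketched in a few lines of numpy. This is not the authors' Matlab analysis code: the sketch uses a simple log-linear least-squares fit and assumes the plateau $B$ is either known or estimated from the long-time tail of the trajectory; the function name and parameter values are illustrative.

```python
import numpy as np

def fit_single_mode(t, y2, baseline=None):
    """Fit (l_circ/L_circ)^2 = A*exp(-t/tau) + B by log-linear least squares.
    If the plateau B is not supplied, estimate it from the trajectory tail."""
    if baseline is None:
        baseline = y2[-max(1, len(y2) // 10):].mean()
    # Subtracting the plateau makes the decay linear in log space:
    # log(y2 - B) = log(A) - t/tau.
    z = np.log(np.clip(y2 - baseline, 1e-12, None))
    slope, intercept = np.polyfit(t, z, 1)
    return -1.0 / slope, np.exp(intercept), baseline  # tau, A, B

# Synthetic single-mode relaxation trajectory with a known timescale.
# (For a step strain-rate input, the accumulated Hencky strain during the
# flow portion is simply eps = eps_dot * t_d.)
t = np.linspace(0.0, 20.0, 400)        # time after flow cessation (s)
tau_true, A_true, B_true = 2.5, 0.3, 0.04
y2 = A_true * np.exp(-t / tau_true) + B_true

tau, A, B = fit_single_mode(t, y2, baseline=B_true)  # recovers tau = 2.5 s
```

The double-mode fit requires nonlinear least squares (two exponentials plus a constant) and a model-selection step comparing the residuals of both fits.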
In order to distinguish between single-mode and double-mode exponential decay behavior, molecular relaxation trajectories are fit to both functions to determine the best fit behavior, as previously described [57]. Interestingly, the fraction of single-mode versus double-mode relaxation trajectories depends on ring-linear polymer blend composition (Fig. 2c). In particular, the fraction of double-mode relaxation trajectories shows a maximum for ring-linear blends with 17$\%$ R-83$\%$ L composition and decreases upon further increases in ring fraction. A similar trend of heterogeneous relaxation of ring polymers was recently observed in ring-linear blend melts using MD simulations [69], where the distribution width of different relaxation modes for rings decreased with increasing ring composition in the blend. However, it is important to note that the heterogeneous relaxation behavior for rings observed here is qualitatively different from the relaxation of linear polymers in ultra-dilute solutions ($10^{-5}$ $c^{*}$) [42], linear polymers in semidilute solutions of purely linear chains [48, 51], and ring polymers in ultra-dilute solutions ($10^{-5}$ $c^{*}$) [52, 53], all of which exhibit a simple single-mode exponential decay for polymer relaxation in the terminal regime. In semidilute unentangled solutions of purely linear chains (0$\%$ R-100$\%$ L), ring polymers similarly exhibit two distinct molecular sub-populations showing single and double-mode relaxation behavior [57]. The bimodal relaxation behavior is thought to arise from transient intermolecular structures wherein linear polymer chains thread into open ring polymers. Here, ring relaxation is influenced by the presence of threaded linear chains, resulting in a large fraction of double-mode relaxation trajectories for blends with low to intermediate ring fractions (e.g. blend composition of 17$\%$ R-83$\%$ L).
We posit that semidilute ring-linear blends with low ring fractions provide a diverse set of local environments in terms of intermolecular HI between rings and linear chains and concentration fluctuations that qualitatively differs from purely linear semidilute polymer solutions, thereby giving rise to the double-mode relaxation behavior. Upon further increasing the ring fraction in ring-linear blends, threading interactions between rings and linear polymers become increasingly less likely such that, on average, rings tend to relax in the absence of intermolecular threading interactions. In addition, the slow phase of the double-mode ring relaxation can also be influenced by solvent-mediated hydrodynamic coupling to linear polymer chain relaxation, discussed in detail in the companion article [58], which is supported by the observation that $\tau_{z}^{lin}\approx 3\tau_{z}^{circ}\approx\tau_{d,2}$, where $\tau_{z}$ denotes the dilute solution longest relaxation time. Indeed, local concentration fluctuations in lightly entangled polymer solutions are known to give rise to multiple molecular sub-populations governing polymer relaxation [51]. Longest relaxation times of ring polymers are plotted in Fig. 2d as a function of ring fraction in ring-linear blends. The longest relaxation time for linear polymers in semidilute solutions (1 $c^{*}$) of purely linear polymers is also plotted as a reference [48]. Moreover, ring polymer relaxation times are included from BD simulations. Relaxation times for ring polymers, including the single-mode timescale $\tau_{s}$ and double-mode timescales $\tau_{d,1}$ and $\tau_{d,2}$, and the longest relaxation time for linear polymers, $\tau_{lin}$, are normalized by their corresponding longest relaxation times in the dilute limit, denoted as $\tau_{z}^{circ}$ and $\tau_{z}^{lin}$, respectively. Non-normalized quantities for all relaxation times are tabulated in Supplementary Table 1. 
Single- and double-mode relaxation times are relatively independent of ring-linear blend composition at a total polymer concentration of 50 $\mu$g/mL, or 1 $c^{*}$ corresponding to the linear DNA polymer. The normalized single-mode relaxation time for ring polymers $\tau_{s}$ in ring-linear blends is consistent with the relaxation time for pure linear polymers in semidilute solutions, which supports the hypothesis that the single-mode exponential relaxation behavior corresponds to relaxation of polymers free from topological interactions with surrounding molecules, regardless of chain topology. The normalized slower double-mode relaxation time $\tau_{d,2}$ is approximately 3$\times$ larger than the normalized single-mode relaxation time, whereas the faster double-mode relaxation time $\tau_{d,1}$ is nearly 10$\times$ smaller than the normalized single-mode relaxation time. Because the single- and double-mode relaxation times are insensitive to blend composition, and similar to ring polymer relaxation in pure semidilute linear polymer solutions [57], we posit that double-mode relaxation behavior for rings in ring-linear blends originates from the formation of transient threaded structures between rings and linear polymers. Moreover, our results suggest that these structures are local and do not include long-range linked intermolecular structures in semidilute solutions. However, the probability of forming ring-linear transient structures varies with blend composition, which is reflected in the different fractions of single- and double-mode relaxation behavior in Fig. 2c. Interestingly, prior diffusion measurements show that ring DNA polymer diffusion coefficients remain relatively constant in 100 $\mu$g/mL blends with different ring-linear compositions [27], which is consistent with our results for the longest relaxation time.
Overall, our results show clear differences between the ring relaxation behavior in semidilute pure linear polymer solutions and ring-linear polymer blends. For semidilute unentangled solutions of purely linear polymers ($c\approx c^{*}$), strictly single-mode relaxation behavior for linear chains is observed [48]. Interestingly, for solutions of purely linear polymers, double-mode relaxation behavior begins to emerge only when the solution concentration is above the critical entanglement concentration $c_{e}$ such that $c>c_{e}$ [51]. On the other hand, our results for ring-linear blends show two distinct molecular sub-populations for ring relaxation in semidilute unentangled solutions at concentrations $c\approx c^{*}$. Taken together, these results highlight the importance of molecular topology on the relaxation dynamics of ring polymers in different ring-linear polymer blend solutions.

B. Transient stretching dynamics of rings in ring-linear blends

Figure 3: Single molecule trajectories of ring polymers in semidilute ring-linear polymer blends show large conformational fluctuations. Transient fractional extension of ring DNA polymers in 50 $\mu$g/mL semidilute ring-linear polymer blends at $Wi\approx$ 1.5 with (a) 0$\%$ ring - 100$\%$ linear polymers, (b) 17$\%$ ring - 83$\%$ linear polymers, (c) 50$\%$ ring - 50$\%$ linear polymers, and (d) 83$\%$ ring - 17$\%$ linear polymers. Individual single molecule trajectories are shown as gray lines and ensemble-averaged trajectories are shown as black lines. A characteristic individual single molecule trajectory is highlighted in blue. Molecular ensembles consist of $n=38$, $n=40$, $n=34$, and $n=39$ molecules for ring polymers in 0$\%$ R-100$\%$ L, 17$\%$ R-83$\%$ L, 50$\%$ R-50$\%$ L, and 83$\%$ R-17$\%$ L blends, respectively. The dashed line indicates the time at which the step strain input is stopped.
We next investigated the transient stretching dynamics of ring polymers in semidilute ring-linear blends with different ring fractions. In all cases, the total polymer concentration was maintained at 50 $\mu$g/mL. In these experiments, fluorescently labeled rings are first allowed to relax to an equilibrium conformation for at least 2$\tau_{s}$ in the absence of flow. At time $t=0$, a step strain rate input with precisely controlled strain rate $\dot{\epsilon}$ is imposed on the polymer blend sample for a total fluid strain $\epsilon$. The flow strength is characterized by the Weissenberg number $Wi=\dot{\epsilon}\tau_{s}$, defined as the strain rate nondimensionalized by the single-mode relaxation time $\tau_{s}$. During the stretching phase, a single fluorescently labeled ring polymer is confined near the stagnation point of planar extensional flow using automated flow control in a device known as a Stokes trap [70, 46]. In this way, ring polymers are trapped for long residence times in extensional flow with well-defined strain rates, enabling direct observation of transient and steady stretching dynamics. During this process, only minor corrections are made to the inlet pressure for flow control, such that the strain rate remains constant during the flow phase of the experiment [70, 46]. Following the step deformation, the flow is abruptly halted (denoted by the dashed line in Fig. 3), and ring polymers are allowed to relax back to thermal equilibrium, as discussed in Sec. III.A. Fig. 3 shows the transient fractional extension $l_{\mathrm{circ}}/L_{\mathrm{circ}}$ of ring polymers in semidilute ring-linear polymer blends at $Wi\approx 1.5$ for four different blend compositions. Additional results for $Wi=1$ and $Wi=2.5$ are shown in Supplementary Fig. 1. In all cases, ring polymers are subjected to $\epsilon>20$ units of fluid strain, and 30-40 single molecule trajectories are analyzed for each condition. In Fig. 3, the black curves represent the ensemble-averaged fractional extension over all single molecule trajectories. Individual single molecule trajectories are plotted in gray, and one representative single molecule trajectory is highlighted in blue. Transient stretching trajectories show that ring polymers exhibit large fluctuations in chain extension in extensional flow. Interestingly, conformational fluctuations are observed for ring polymers in all ring-linear blends. Chain fluctuations persist long after the initial transient stretching process has ended, such that chain fluctuations continue even for large amounts of accumulated strain $\epsilon>$ 8-10. Qualitatively, rings exhibit large conformational fluctuations in ring-linear blends with intermediate ring fractions, such as the 17$\%$ R-83$\%$ L blend. Moreover, conformational fluctuations appear to decrease upon further increasing the ring fraction towards the 83$\%$ R-17$\%$ L blend. These trends are consistent with the fraction of single- and double-mode relaxation trajectories in ring-linear blends, as shown in Fig. 2c. BD simulations show that ring polymer conformational fluctuations are driven by both intermolecular threading events with nearby linear chains and solvent-mediated intermolecular HI [58]. These behaviors are discussed in detail in the companion simulation and modeling article [58], and we briefly summarize these findings here with respect to the experimental results. In general, highly stretched linear chains in semidilute ring-linear blend solutions induce strong hydrodynamic disturbance flows, generally stronger than those induced by more compact ring polymers at the same flow strength. Consequently, these long-range HI disturbance flows drive large conformational fluctuations in ring polymers in flow. In single molecule experiments, we observe unexpected polymer stretching behavior in semidilute solutions that is attributed to intermolecular HI.
For example, for 17$\%$ R-83$\%$ L and 50$\%$ R-50$\%$ L blends, some ring polymers fully recoil back to equilibrium levels of extension, followed by re-stretching during continued deformation in flow (blue trajectory in Fig. 3c). This behavior resembles the large conformational fluctuations that arise due to polymer tumbling in dilute solution shear flow [44], although with completely different physical origins. In the case of dilute solution shear flow, the coupling between the rotational and extensional components of flow leads to tumbling behavior for rings [56] and linear polymers [44, 42]. For semidilute ring-linear blends in extensional flow, however, conformational fluctuations arise due to flow-driven intermolecular interactions and long-range HI between polymers with different chain topologies and relaxation times. Upon further increasing the ring fraction, such as in the 83$\%$ R-17$\%$ L blend, ring polymers eventually show smaller-magnitude conformational fluctuations and generally stretch to larger fractional extensions. This behavior arises due to a decreased probability of ring-linear threading interactions and weaker HI disturbance flows in blends with a dominant ring fraction, as stretched linear chains generally induce stronger intermolecular HI disturbance flows in ring-linear blends [58].

Figure 4: Probability distribution of ring polymer fractional extension in semidilute ring-linear blends as a function of blend composition near $Wi\approx$ 1.5. Histograms showing probability of ring polymer extension in semidilute ring-linear polymer blends with: (a) 0$\%$ ring - 100$\%$ linear, (b) 17$\%$ ring - 83$\%$ linear, (c) 50$\%$ ring - 50$\%$ linear, and (d) 83$\%$ ring - 17$\%$ linear for accumulated fluid strains of $\epsilon=5$, $\epsilon=10$, $\epsilon=15$, $\epsilon=20$ at $Wi\approx$ 1.5. Molecular ensembles consist of $n=38$, $n=40$, $n=34$, and $n=39$ molecules for the four different blends, respectively.
All experiments are performed at a total polymer concentration of 50 $\mu$g/mL. Data for linear polymers (gray bars) in semidilute solutions of pure linear chains at 50 $\mu$g/mL are from Hsiao _et al._ [48]. The large conformational fluctuations for ring polymers in semidilute ring-linear blends give rise to broad distributions in ring polymer extension in flow. Histograms of ring polymer extension for four different blend compositions near $Wi\approx$ 1.5 are shown in Fig. 4. Compared to linear polymers in pure semidilute linear polymer solutions, ring polymers in the same solutions (0$\%$ R-100$\%$ L) exhibit broader distributions of molecular extension at smaller fractional extensions (Fig. 4a). For instance, linear polymers are stretched to $l_{\mathrm{lin}}/L>$ 0.6 at $\epsilon$ = 10, while the majority of ring polymers show a fractional extension of $l_{\mathrm{circ}}/L_{\mathrm{circ}}\approx 0.3$. The probability distribution of ring polymer extension $l_{\mathrm{circ}}/L_{\mathrm{circ}}$ further broadens in the 17$\%$ R-83$\%$ L blend (Fig. 4b). Interestingly, ring polymer extension $l_{\mathrm{circ}}/L_{\mathrm{circ}}$ ranges from $\approx$ 0.2-0.6 and is peaked around $l_{\mathrm{circ}}/L_{\mathrm{circ}}\approx 0.4$ as the accumulated fluid strain increases from $\epsilon$ = 5 to 20. This behavior is consistent with the single molecule transient stretching trajectories for the 17$\%$ R-83$\%$ L blend (Fig. 3b). Moreover, the broad distribution of $l_{\mathrm{circ}}/L_{\mathrm{circ}}$ is consistent with the notion that large conformational fluctuations result from intermolecular interactions between rings and linear polymers in the blend.
Upon further increasing the ring fraction in the blend to 50$\%$ R-50$\%$ L, the probability distribution of $l_{\mathrm{circ}}/L_{\mathrm{circ}}$ for rings narrows and shifts toward smaller extensions, with the majority of ring polymers stretched to $l_{\mathrm{circ}}/L_{\mathrm{circ}}\approx 0.3$ (Fig. 4c). These results are consistent with observations from the single molecule trajectories (Fig. 3c), where the 50$\%$ R-50$\%$ L blend shows the smallest average fractional extension. Upon increasing the ring fraction further to 83$\%$ R-17$\%$ L (Fig. 4d), the distributions shift to larger average extensions and rings tend to become more stretched for larger fluid strains.

C. Molecular individualism and ring polymer conformational fluctuations

Figure 5: Characteristic transient stretching trajectories for ring polymers in semidilute 17$\%$ ring - 83$\%$ linear blends at $Wi\approx 1.5$ from (a) experiments and (b) BD simulations. Corresponding single molecule snapshots are shown in (c) and simulation snapshots are shown in (d). Characteristic transient stretching trajectories for ring polymers in semidilute 83$\%$ ring - 17$\%$ linear blends at $Wi\approx 1.5$ from (e) experiments and (f) BD simulations. Corresponding single molecule snapshots are shown in (g) and simulation snapshots are shown in (h). The Roman numerals correspond to individual time points along the trajectory. The scale bar is 3 $\mu$m in the experimental snapshots. In the simulation snapshots, linear polymers are labeled in blue and ring polymers are labeled in yellow.

Characteristic transient trajectories for single ring polymers from single molecule experiments and BD simulations are shown in Fig. 5. A characteristic stretching trajectory for a ring polymer in a 17$\%$ R-83$\%$ L blend at $Wi$ $\approx$ 1.5 is shown in Fig. 5a, together with corresponding single polymer snapshots during the stretching trajectory (Fig. 5c).
The Roman numerals correspond to individual time points along the trajectory. Large conformational fluctuations are observed for ring polymers in 17$\%$ R-83$\%$ L blends, with some rings unexpectedly showing recoiling and subsequent restretching behavior in flow, as shown in Fig. 5a (denoted by time point vi). In simple shear flow, ring polymers exhibit repeated cycles of stretching, collapse, and tumbling, a behavior that arises due to a coupling between the rotational and extensional components of flow. Although extensional flow alone is inadequate to produce the characteristic tumbling behavior observed in shear flow, semidilute blends of ring-linear chains facilitate long-range HI that can generate local disturbance flows. Together with intermolecular threading interactions, these effects give rise to large-scale conformational fluctuations in flow. We further visualized ring conformational fluctuations with molecular simulation, and we show a characteristic dynamic trajectory for a single ring polymer in a 17$\%$ R-83$\%$ L blend at $Wi$ = 1.5 in Fig. 5b. We again plot the fractional ring polymer extension $l_{\text{circ}}/L_{\text{circ}}$ as a function of strain, denoting snapshots (Fig. 5d) at individual time points along the trajectory with Roman numerals. In agreement with the single molecule experiments, BD simulations show large-magnitude conformational fluctuations for rings in flow, including large retraction and restretching events in semidilute solution extensional flow. We note that this behavior highlights the importance of long-range HI in generating local flows that are non-extension dominated and contain rotational character, in part because the ring polymer shown in this characteristic trajectory does not exhibit a distinct ring-linear threading event that drives conformational fluctuations. We further explore the role of HI in driving ring conformational fluctuations in the companion article [58].
Prior work has reported that ring conformational fluctuations in semidilute linear polymer solutions can also be caused by threading of linear polymers into the partially stretched, open conformation of rings [57]. Transport of linear polymer chains into open ring polymers in flow leads to repeated threading and unthreading events, giving rise to repeated cycles of transient chain extension and retraction. Upon increasing the fraction of rings in ring-linear blends, threaded linear chains can (in principle) simultaneously interact with adjacent rings in the blend solution, thereby giving rise to complex intermolecular interactions. Our results suggest that large-magnitude conformational fluctuations for rings arise not only due to long-range HI, but also due to the formation of transient ring-linear threaded structures in the blend. Experimental SMFM results show that rings fluctuate drastically in non-dilute flows, and in some cases fully recoil (Figs. 5a-d) in low to intermediate ring fraction blends (17$\%$ R-83$\%$ L). Interestingly, ring conformational fluctuations drastically decrease as the ring fraction increases (e.g. 83$\%$ R-17$\%$ L blends), showing only minor fluctuations in ring polymer fractional extension (Fig. 5e,g). Analogous results are observed in 83$\%$ R-17$\%$ L blends in BD simulations (Figs. 5f,h), which similarly show smaller-magnitude conformational fluctuations at larger ring fraction. These observations are consistent with the notion of a decreased probability of ring-linear interactions, including both intermolecular threading events and solvent-mediated hydrodynamic interactions, when ring polymers constitute the majority of the blend. We quantify ring conformational fluctuations by determining the average fluctuation in fractional chain extension, $\langle\delta\rangle/L_{\mathrm{circ}}$.
Here, $\langle\delta\rangle/L_{\mathrm{circ}}$ is defined as the average fluctuation in conformational extension over the molecular ensemble after the initial transient stretching phase, such that: $\frac{\langle\delta\rangle}{L_{\mathrm{{circ}}}}=\frac{\sum_{n=1}^{\mathcal{N}}\sqrt{\sum_{t_{90}}^{t_{f}}[l_{n}(t)-\langle l_{n}\rangle]^{2}}}{\mathcal{N}L_{\mathrm{{circ}}}}$ (2) where $l_{n}(t)$ is the transient polymer extension for individual single polymer chains at time $t$, $\langle l_{n}\rangle$ is the time-averaged (mean) polymer extension, and $\mathcal{N}$ denotes the total number of individual trajectories in the ensemble. In Eq. 2, $t_{\mathrm{f}}$ denotes the time when the step strain rate input stops (dashed line in Fig. 3), and $t_{90}$ is defined as the time at which the fractional polymer extension first reaches 90$\%$ of the average fractional extension at time $t_{\mathrm{f}}$. In this way, we discard the initial transient stretching of ring polymers and only compute the chain extension fluctuation quantities after the initial transient phase has died out, as previously reported [57]. In all cases, transient polymer extension is observed for at least 20-25 units of strain in extensional flow.

Figure 6: Fluctuations in fractional extension of ring polymers in semidilute ring-linear polymer blends as a function of $Wi$. The average chain fluctuation quantity $\langle\delta\rangle$ is normalized by the contour length $L_{\mathrm{circ}}$ for ring polymers and $L$ for linear polymers, respectively. Each molecular ensemble contains $n\geq 34$ single molecule traces.

As shown in Fig. 6, fractional extension fluctuations for ring polymers in semidilute ring-linear blends $\langle\delta\rangle/L_{\mathrm{circ}}$ are compared to linear polymer chain fluctuations in pure semidilute linear polymer solutions $\langle\delta\rangle/L$.
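Eq. 2 translates directly into code. The following is a minimal sketch, not the authors' analysis code: the function name is hypothetical, and each trajectory is assumed to be already windowed to the interval $t_{90}$ to $t_{\mathrm{f}}$ and sampled at discrete frames.

```python
import math

def fluctuation_quantity(trajectories, L_circ):
    """Average chain fluctuation <delta>/L_circ, following Eq. 2 literally:
    for each trajectory l_n(t), take the square root of the summed squared
    deviations from its time-averaged extension <l_n>, then average over
    the N-member ensemble and normalize by the contour length L_circ."""
    N = len(trajectories)
    total = 0.0
    for l in trajectories:
        l_mean = sum(l) / len(l)
        total += math.sqrt(sum((x - l_mean) ** 2 for x in l))
    return total / (N * L_circ)
```

A perfectly steady trajectory gives zero, and larger conformational fluctuations about the mean extension grow the quantity monotonically.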
Our results show an increase in ring chain fluctuations in semidilute ring-linear polymer blends compared to pure linear polymer solutions. Moreover, ring polymer chain fluctuations in ring-linear blends do not increase monotonically with increasing ring fraction. Ring fluctuations first increase upon increasing the ring fraction in blends, reaching a maximum for 17$\%$ R-83$\%$ L blends, followed by a gradual decrease in magnitude for the 83$\%$ R-17$\%$ L blends. This behavior is consistent with trends observed in the probability distribution of relaxation modes (Fig. 2b) and for single molecule trajectories (Fig. 3) showing a unique set of molecular individualism (Fig. 5), wherein ring conformations show large fluctuations, including full retraction to a coiled state, during the course of strong extensional flow in 17$\%$ R-83$\%$ L and 50$\%$ R-50$\%$ L blends. Upon increasing the fraction of rings in ring-linear blends, intermolecular interactions between stretched rings and linear chains are gradually suppressed, which results in an eventual decrease in chain fluctuations. In addition, ring polymer chain fluctuations tend to increase in the vicinity of the coil-stretch transition (CST), corresponding to Weissenberg numbers near $Wi\approx 1.0$ for pure semidilute linear polymer solutions [48]. The flow-strength dependence of ring polymer conformational fluctuations in ring-linear blends is further discussed in the section below.

D. Characteristic timescales of ring polymer conformational fluctuations

To further understand ring dynamics in ring-linear blends, we determined the autocorrelation of ring polymer extension fluctuations after the initial transient stretching phase. In particular, we used an autocorrelation analysis to quantify ring polymer extension fluctuations relative to the average polymer extension as a function of $Wi$.
The autocorrelation function of a real-valued, integrable fluctuating quantity $x(t)$ is defined as: $C_{x,x}(\lambda)=\langle x(t)x(t+\lambda)\rangle_{t}$ (3) where $\lambda$ is an offset time and $\langle\cdot\rangle_{t}$ denotes a time-averaged quantity. Here, fluctuations in ring polymer extensions are defined as the average (mean) extension $\langle l\rangle_{t}$ subtracted from the instantaneous chain extension $l(t)$. Thus, $l^{\prime}(t)=l(t)-\langle l\rangle_{t}$ and the normalized autocorrelation function $C_{l^{\prime},l^{\prime}}$ is given by: $C_{l^{\prime},l^{\prime}}(\lambda)\equiv\frac{\langle l^{\prime}(t)l^{\prime}(t+\lambda)\rangle_{t}}{\langle l^{\prime 2}(t)\rangle_{t}}=\frac{\int_{-\infty}^{\infty}l^{\prime}(t)l^{\prime}(t+\lambda)dt}{\int_{-\infty}^{\infty}l^{\prime 2}(t)dt}$ (4) The quantity $C_{l^{\prime},l^{\prime}}$ is normalized by the autocorrelation function at zero offset time $\lambda=0$. The initial transient stretching phase (start-up phase) is discarded when calculating the autocorrelation function, similar to the calculation of $\langle\delta\rangle$, where polymer fractional extension is only considered between $t_{90}$ and $t_{\mathrm{f}}$, as described in Section III B. The offset time $\lambda$ is normalized by the strain rate $\dot{\epsilon}$.

Figure 7: Quantitative analysis of ring conformational fluctuations in flow. Autocorrelation of conformational fluctuations for ring polymers in semidilute ring-linear polymer blends after the initial start-up phase at $Wi\approx 1.5$ from (a) experiments and (c) BD simulations. Each experimental molecular ensemble contains $n\geq 34$ single molecule traces. The experimental autocorrelation of conformational fluctuations for pure linear semidilute polymer solutions (1 $c^{*}$, 50 $\mu$g/mL) is shown as a reference based on data from Hsiao _et al._ [48].
Characteristic correlation times of ring polymer conformational fluctuations as a function of blend composition at $Wi\approx 1.5$ from (b) experiments and (d) BD simulations.

Fig. 7a shows the autocorrelation functions of conformational fluctuations $C_{l^{\prime},l^{\prime}}$ for rings as a function of ring fraction in ring-linear blends at $Wi\approx 1.5$ from experiments. Upon increasing the ring fraction in blends from 0$\%$ R-100$\%$ L to 50$\%$ R-50$\%$ L, the autocorrelation function is relatively constant. However, the autocorrelation function rapidly decays when the fraction of rings increases in the 83$\%$ R-17$\%$ L blend. On the other hand, the autocorrelation function for linear polymers in pure linear semidilute solutions shows a more rapid decay compared to all of the ring-linear blends. In the case of pure linear semidilute solutions, fluctuations in fractional extension mainly arise due to Brownian fluctuations rather than intermolecular hooking interactions. The characteristic autocorrelation decay times are shown in Fig. 7b, defined as the time at which the autocorrelation function in Fig. 7a first equals zero. The dimensionless characteristic decay time (or decorrelation time) is approximately 4.2 strain units for blends with ring fractions between 0$\%$ R-100$\%$ L and 50$\%$ R-50$\%$ L, but significantly decreases to approximately 2.5 strain units for the 83$\%$ R-17$\%$ L blend. Autocorrelation functions of conformational fluctuations from molecular simulations are shown in Fig. 7c, and the characteristic autocorrelation decay times are determined as indicated in Fig. 7d. Qualitative agreement is observed between simulations and SMFM experiments. The autocorrelation function shows similar behavior in blends from 2$\%$ R-98$\%$ L to 50$\%$ R-50$\%$ L, and begins to decay when the ring fraction increases to 83$\%$ R-17$\%$ L.
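The discrete analogue of Eq. 4 and the zero-crossing decay time used in Fig. 7 can be sketched as follows. This is an illustrative implementation assuming uniformly sampled frames; the function names are hypothetical and not the authors' analysis code.

```python
def normalized_autocorrelation(l, max_lag):
    """Normalized autocorrelation C(lag) of extension fluctuations
    l'(t) = l(t) - <l>_t (Eq. 4), evaluated at integer frame offsets
    and normalized by the zero-offset value, so C(0) = 1."""
    n = len(l)
    l_mean = sum(l) / n
    lp = [x - l_mean for x in l]          # fluctuations about the mean
    c0 = sum(x * x for x in lp) / n       # zero-offset normalization
    out = []
    for lag in range(max_lag + 1):
        c = sum(lp[t] * lp[t + lag] for t in range(n - lag)) / (n - lag)
        out.append(c / c0)
    return out

def decorrelation_time(C, strain_per_frame):
    """Characteristic decay time: the first offset at which C crosses zero,
    linearly interpolated between frames and converted to strain units."""
    for i in range(1, len(C)):
        if C[i] <= 0.0:
            frac = C[i - 1] / (C[i - 1] - C[i])
            return (i - 1 + frac) * strain_per_frame
    return None  # no zero crossing within the computed range
```

Applying these two functions to each windowed trajectory and averaging over the ensemble reproduces the kind of composition-dependent decorrelation times reported in Fig. 7b,d.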
The quickest decay in autocorrelation functions occurs in pure semidilute ring (100$\%$ R-0$\%$ L) and pure linear polymer solutions, where the decay is more rapid than in any of the ring-linear polymer blend solutions. Our results indicate that, in an average sense, the interaction timescale between ring and linear polymer chains remains relatively constant when the majority of polymers in ring-linear blends is linear. However, it is important to note that due to the role of stochasticity, one would expect a distribution of molecular sub-populations over the entire ensemble. From this view, the ensemble-averaged autocorrelation functions may not reflect the drastic variations in molecular extension that may occur in individual molecules, as shown in Fig. 5. Rather, the autocorrelation functions reflect the average intermolecular interaction timescales between rings and adjacent ring or linear polymers in blends.

E. Average fractional extension of rings

Figure 8: Steady fractional extension in extensional flow. (a) Steady fractional extension of pure semidilute linear polymer solutions at 1 $c^{*}$ (50 $\mu$g/mL) and ring polymers in 50 $\mu$g/mL semidilute ring-linear blends in extensional flow. Experimental data for dilute ring polymers are taken from Li _et al._ [52], and data for 1 $c^{*}$ linear polymers are taken from Hsiao _et al._ [48]. (b) Average steady fractional extension of ring polymers in different semidilute ring-linear polymer blends at $Wi\approx 1.5$. Each molecular ensemble consists of $n\geq 34$ single molecule trajectories.

We further determined the average fractional extension for ring polymers in semidilute ring-linear polymer blends, using a method similar to determining the average fractional extension of polymers under large amplitude oscillatory extensional (LAOE) flow [46, 47].
In brief, we define an average steady fractional extension between $t_{90}$ and $t_{\mathrm{f}}$, determined from the fractional extension after the initial transient start-up phase, where $t_{\mathrm{f}}$ is the time at which the step strain rate input stops and $t_{90}$ is the time at which the fractional polymer extension first reaches 90$\%$ of the average fractional extension at time $t_{\mathrm{f}}$. In general, this method corresponds to determining the average polymer extension for fluid strains of approximately $\epsilon>8$ in steady extensional flow. The average fractional extension for ring polymers in dilute solution [52] and for linear polymers in semidilute solution [48] are plotted as a reference (Fig. 8a). Prior work has shown that ring polymers exhibit a delayed coil-stretch transition in dilute solutions due to intramolecular HI between the two strands in extensional flow [52, 53], which is not observed for linear polymers in dilute solutions [43]. Moreover, linear polymers show a slight increase in the critical Weissenberg number at the coil-stretch transition, $Wi_{c}$, in semidilute unentangled solutions due to intermolecular HI and additional molecular interactions between neighboring chains. Interestingly, our results show that $Wi_{c}$ for ring polymers depends on the composition of semidilute ring-linear blends, even at the same total polymer concentration of 50 $\mu$g/mL. We estimate $Wi_{c}$ based on the average fractional extension for ring polymers in semidilute ring-linear polymer blends at the coil-stretch transition, $\langle\tilde{l}_{c}\rangle$. Here, $\langle\tilde{l}_{c}\rangle=\langle l_{c}\rangle/L_{\mathrm{{circ}}}$ is determined as the mean value between the coiled and stretched limits on a logarithmic scale, as previously described [71].
Hence, $\langle\tilde{l}_{c}\rangle$ is defined as $\ln{\langle\tilde{l}_{c}\rangle}^{2}=(\ln{\langle\tilde{l}_{0}\rangle}^{2}+\ln{\langle\tilde{l}_{\mathrm{max}}\rangle}^{2})/2$ (5) where $\tilde{l}_{0}=\langle l_{0}\rangle/L_{\mathrm{{circ}}}$ is the fractional extension of ring polymers in the equilibrium coiled state and $\tilde{l}_{\mathrm{max}}=\langle l_{\mathrm{max}}\rangle/L_{\mathrm{{circ}}}$ is the maximum fractional extension for ring polymers in our experiments. In this way, the critical Weissenberg number at the coil-stretch transition is determined by finding the Weissenberg number corresponding to $\langle\tilde{l}_{c}\rangle$, such that $Wi_{c}$ = 1.5, $Wi_{c}$ = 1.1, $Wi_{c}$ = 1.4, and $Wi_{c}$ = 0.9 for ring polymers in 0$\%$ R-100$\%$ L, 17$\%$ R-83$\%$ L, 50$\%$ R-50$\%$ L, and 83$\%$ R-17$\%$ L ring-linear polymer blends, respectively. The $Wi_{c}$ values are in reasonable agreement with the Weissenberg numbers corresponding to the maximum fluctuation quantities $\langle\delta\rangle$ (Fig. 6), as discussed in Section III C.

Figure 9: Schematic of intermolecular interactions in semidilute ring-linear blends. (a) In nearly pure semidilute linear polymer solutions (0$\%$ R-100$\%$ L), background linear polymers thread into open ring polymers in extensional flow. Threaded states are not necessarily limited to a doubly-threaded state as shown here. (b) In semidilute ring-linear polymer blends, linear polymers can thread with multiple rings, potentially leading to a local transient interlinked structure in flow.

Based on these results, ring polymer dynamics are influenced by both polymer architecture and the relative composition of the blends, thereby affecting both the critical Weissenberg number at the coil-stretch transition and the average fractional extension.
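Note that exponentiating Eq. 5 reduces it to a geometric mean, $\langle\tilde{l}_{c}\rangle=\sqrt{\tilde{l}_{0}\,\tilde{l}_{\mathrm{max}}}$. A minimal sketch of the windowed steady-state averaging and this log-midpoint criterion follows; it is illustrative only, with hypothetical function names, and uses the final-frame extension as a stand-in for the averaged extension at $t_{\mathrm{f}}$ when locating $t_{90}$.

```python
import math

def steady_window_average(extensions):
    """Average fractional extension between t_90 and t_f: t_90 is taken as
    the first frame at which the extension reaches 90% of the final-frame
    value (a stand-in for the averaged value at t_f); the mean is then
    computed over the remaining window."""
    target = 0.9 * extensions[-1]
    for i, x in enumerate(extensions):
        if x >= target:
            window = extensions[i:]
            return sum(window) / len(window)
    return None

def critical_extension(l0_frac, lmax_frac):
    """Fractional extension at the coil-stretch transition per Eq. 5:
    ln<l_c>^2 = (ln l0^2 + ln lmax^2)/2, which is equivalent to the
    geometric mean sqrt(l0 * lmax) of the coiled and stretched limits."""
    return math.exp(0.5 * (math.log(l0_frac ** 2) + math.log(lmax_frac ** 2))) ** 0.5
```

$Wi_{c}$ then follows by locating where the measured steady extension versus $Wi$ curve crosses the value returned by `critical_extension`.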
Interestingly, the 50$\%$ R-50$\%$ L blend shares the largest $Wi_{\mathrm{c}}$ with the pure linear polymer solution (0$\%$ R-100$\%$ L) but shows the smallest average fractional extension. Fig. 8b further shows a direct comparison of the average fractional extension as a function of ring-linear blend composition at $Wi\approx 1.5$. Broadly, these results are consistent with recent microrheology experiments on concentrated ring-linear blends of DNA, where the largest plateau modulus was observed for blends containing comparable amounts of ring and linear polymers [72]. Taken together, our results support a scenario in which ring and linear polymers strongly interact in semidilute ring-linear blend solutions (Fig. 9), with clear differences compared to ultra-dilute solutions or pure linear semidilute solutions.

## IV Conclusions

Understanding ring polymer dynamics is a particularly challenging and interesting problem in soft materials and rheology. Despite recent progress, we do not yet fully understand the combined roles of molecular architecture, intermolecular interactions, and long-range HI on the dynamics of ring-linear blends under nonequilibrium conditions. In this work, we use single molecule fluorescence microscopy coupled with automated flow control and microfluidics to systematically investigate the nonequilibrium dynamics of ring polymers in semidilute ring-linear DNA blend solutions. Our results show molecular evidence of large conformational fluctuations of ring polymers in steady flows, which arise due to a combination of linear chains threading into open rings and strong intermolecular HI in flow [58]. Our results are consistent with a molecular picture wherein strongly interacting ring-linear transient structures form and may exhibit local resistance to stretching in flow, especially when the blend contains comparable amounts of ring and linear polymers. 
The relaxation dynamics of ring DNA polymers in semidilute ring-linear blends are governed by two distinct molecular sub-populations. One sub-population exhibits single-mode exponential relaxation, which is attributed to isolated ring polymers that are not associated with intermolecular transient structures between ring and linear polymers following the cessation of flow. The second molecular sub-population shows a double-mode relaxation response, which likely arises from interactions of ring polymers with background linear polymers, including ring-linear chain threading and solvent-mediated HI effects. The probability of double-mode exponential relaxation first increases with increasing ring fraction in the blend and then decreases as the ring fraction increases up to the 83$\%$ R-17$\%$ L blend. These results indicate that different blend compositions alter the degree of interchain interactions between ring and linear polymers, thereby affecting the relaxation dynamics of rings. Our results show strikingly large conformational fluctuations for rings in ring-linear blends in steady extensional flow. By quantifying the conformational fluctuations through a chain fluctuation quantity $\langle\delta\rangle$, we find that chain fluctuations increase with ring polymer fraction in blends but substantially decrease when the blend contains $>80\%$ rings. In addition, our results reveal a unique set of molecular conformations and a marked increase in molecular individualism during the transient stretching process for rings in 17$\%$ R-83$\%$ L and 50$\%$ R-50$\%$ L blends. Surprisingly, individual rings are observed to tumble, re-coil, and re-stretch in semidilute ring-linear blends in planar extensional flow, a behavior previously observed only for dilute linear [44, 42] and ring polymers [56] under simple shear flow. 
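The two relaxation signatures can be told apart numerically: a single-mode exponential decay is a straight line in log space, while a double-mode response shows curvature. Below is a crude classifier of our own (a sketch, not the fitting procedure used in the paper; the curvature threshold is an assumption):

```python
import numpy as np

def relaxation_mode(t, l_sq, curvature_tol=1e-3):
    """Label a relaxation trace l^2(t) as 'single' or 'double' mode.

    A single exponential l^2 ~ exp(-t/tau) gives a straight line in
    ln(l^2) versus t; a sum of two exponentials bends.  We fit a
    quadratic and use the quadratic coefficient as a crude curvature
    measure.
    """
    a2, _, _ = np.polyfit(t, np.log(l_sq), 2)
    return "double" if abs(a2) > curvature_tol else "single"
```

In practice one would fit both $A\,e^{-t/\tau}$ and $A_1 e^{-t/\tau_1}+A_2 e^{-t/\tau_2}$ and compare residuals; the log-space curvature test above is the simplest version of the same idea.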
These large fluctuation and tumbling behaviors are attributed to a combination of long-range intermolecular HI in semidilute solutions and chain-chain interactions in flow. Simulation results directly capture these conformational fluctuations and provide further evidence that HI plays an important role in semidilute solutions [58]. The autocorrelation of ring polymer fluctuations shows a slower rate of decay and a longer correlation time for ring polymers in the 0$\%$ R-100$\%$ L, 17$\%$ R-83$\%$ L, and 50$\%$ R-50$\%$ L blends. Hence, we hypothesize that the large conformational fluctuations and the unique molecular individualism are indicative of linear polymers threading into rings to form transient intermolecular structures in flow (Fig. 9). Our results further show a dependence of molecular stretching and conformation on the ring fraction in ring-linear blends. As the fraction of ring polymers increases to the 83$\%$ R-17$\%$ L blend, ring chain extension fluctuations sharply decrease, nearly resembling the fluctuations of linear polymers in pure semidilute linear solutions [48]. The small-magnitude conformational fluctuations in fractional extension also result in a large average fractional extension, and the autocorrelation function of the conformational fluctuations decays noticeably faster for ring polymers in the 83$\%$ R-17$\%$ L blend than in the other three blends. Coarse-grained molecular simulations show that the reduced ring fluctuations in semidilute ring-linear blends with high ring fraction occur due to decreased intermolecular HI effects between highly stretched linear chains and adjacent ring polymers [58]. Taken together, these results provide a new molecular understanding of ring polymer dynamics in ring-linear blends. In particular, our combined experimental and computational results show direct molecular evidence for the transient threading of linear polymers through open ring polymers in flow. 
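The fluctuation-autocorrelation analysis referred to above can be sketched as follows. This is an illustrative implementation only; the normalization and the $1/e$ criterion for extracting a correlation time are our assumptions, not necessarily those of the paper:

```python
import numpy as np

def fluctuation_autocorrelation(l_frac):
    """Normalized autocorrelation of delta(t) = l(t) - <l>."""
    d = l_frac - l_frac.mean()
    # np.correlate in 'full' mode; keep non-negative lags only
    acf = np.correlate(d, d, mode="full")[len(d) - 1:]
    return acf / acf[0]

def correlation_time(acf, dt=1.0):
    """First lag at which the autocorrelation drops below 1/e."""
    below = np.nonzero(acf < 1.0 / np.e)[0]
    return below[0] * dt if below.size else np.inf
```

A slower-decaying `acf` and larger `correlation_time` correspond to the longer-lived, strongly interacting conformational fluctuations observed in the ring-rich and mixed blends.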
From a broad perspective, this work provides an improved understanding of ring dynamics in non-dilute polymer solutions, revealing new information regarding the nonequilibrium dynamics of rings that may be useful in informing the future design and processing of polymer solutions with complex molecular architectures.

###### Acknowledgements. This research was supported by the National Science Foundation (NSF) Award CBET-1604038 (Y.Z. and C.M.S.) and partially supported by the NSF through the University of Illinois at Urbana-Champaign Materials Research Science and Engineering Center (MRSEC) DMR-1720633 (Y.Z. and C.M.S.), a PPG-MRL graduate research assistantship award (Y.Z.), a DuPont Science & Engineering fellowship (C.D.Y.), NSF Award CBET-1803757 (C.D.Y. and C.E.S.), NSF Award CBET-1603925 (K.E.R., M.L., and R.M.R-A.), and NSF Award CBET-1603943 (S.B., D.K., and G.B.M.).

## References

* McLeish [2002] T. C. B. McLeish, Polymers without beginning or end, Science 297, 2005 (2002). * Taanman [1999] J. W. Taanman, The mitochondrial genome: Structure, transcription, translation and replication, Biochimica et Biophysica Acta - Bioenergetics 1410, 103 (1999). * Halverson _et al._ [2014] J. D. Halverson, J. Smrek, K. Kremer, and A. Y. Grosberg, From a melt of rings to chromosome territories: The role of topological constraints in genome folding, Reports on Progress in Physics 77, 022601 (2014), arXiv:1311.5262. * Deutman _et al._ [2008] A. B. C. Deutman, C. Monnereau, J. A. A. W. Elemans, G. Ercolani, R. J. M. Nolte, and A. E. Rowan, Mechanism of threading a polymer through a macrocyclic ring, Science 322, 1668 (2008). * Edwards _et al._ [2019] J. P. Edwards, W. J. Wolf, and R. H. Grubbs, The synthesis of cyclic polymers by olefin metathesis: Achievements and challenges, Journal of Polymer Science, Part A: Polymer Chemistry 57, 228 (2019). * Feinberg _et al._ [2018] A. M. Feinberg, H. L. Hernandez, C. L. Plantz, E. B. Mejia, N. R. Sottos, S. R. White, and J. S. 
Moore, Cyclic poly(phthalaldehyde): thermoforming a bulk transient material, ACS Macro Letters 7, 47 (2018). * Lloyd _et al._ [2019] E. M. Lloyd, H. L. Hernandez, A. M. Feinberg, M. Yourdkhani, E. K. Zen, E. B. Mejia, N. R. Sottos, J. S. Moore, and S. W. White, Fully recyclable metastable polymers and composites, Chemistry of Materials 31, 398 (2019). * Rosenthal-Kim and Puskas [2015] E. Q. Rosenthal-Kim and J. E. Puskas, Green polymer chemistry: investigating the mechanism of radical ring-opening redox polymerization (R3P) of 3,6-dioxa-1,8-octanedithiol (DODT), Molecules 20, 6504–6519 (2015). * Rosenthal-Kim and Puskas [2012] E. Rosenthal-Kim and J. E. Puskas, Green polymer chemistry: living oxidative polymerization of dithiols, Pure Appl. Chem. 84, 2121–2133 (2012). * Roovers [1985] J. Roovers, Melt Properties of Ring Polystyrenes, Macromolecules 18, 1359 (1985). * McKenna _et al._ [1987] G. B. McKenna, G. Hadziioannou, P. Lutz, G. Hild, C. Strazielle, C. Straupe, P. Rempp, and A. J. Kovacs, Dilute Solution Characterization of Cyclic Polystyrene Molecules and Their Zero-Shear Viscosity in the Melt, Macromolecules 20, 498 (1987). * Roovers [1988] J. Roovers, Viscoelastic properties of polybutadiene rings, Macromolecules 21, 1517 (1988). * McKenna _et al._ [1989] G. B. McKenna, B. J. Hostetter, N. Hadjichristidis, L. J. Fetters, and D. J. Plazek, A study of the linear viscoelastic properties of cyclic polystyrenes using creep and recovery measurements, Macromolecules 22, 1834 (1989). * Halverson _et al._ [2011] J. D. Halverson, W. B. Lee, G. S. Grest, A. Y. Grosberg, and K. Kremer, Molecular dynamics simulation study of nonconcatenated ring polymers in a melt. I. Statics, Journal of Chemical Physics 134, 204904 (2011), arXiv:1104.5653v1. * Pasquino _et al._ [2013] R. Pasquino, T. C. Vasilakopoulos, Y. C. Jeong, H. Lee, S. Rogers, G. Sakellariou, J. Allgaier, A. Takano, A. R. Brás, T. Chang, S. Gooßen, W. Pyckhout-Hintzen, A. Wischnewski, N. 
Hadjichristidis, D. Richter, M. Rubinstein, and D. Vlassopoulos, Viscosity of ring polymer melts, ACS Macro Letters 2, 874 (2013). * Kapnistos _et al._ [2008] M. Kapnistos, M. Lang, D. Vlassopoulos, W. Pyckhout-Hintzen, D. Richter, D. Cho, T. Chang, and M. Rubinstein, Unexpected power-law stress relaxation of entangled ring polymers, Nature Materials 7, 997 (2008). * Doi _et al._ [2017] Y. Doi, A. Matsumoto, T. Inoue, T. Iwamoto, A. Takano, Y. Matsushita, Y. Takahashi, and H. Watanabe, Re-examination of terminal relaxation behavior of high-molecular-weight ring polystyrene melts, Rheologica Acta 56, 567 (2017). * Obukhov _et al._ [1994] S. P. Obukhov, M. Rubinstein, and T. Duke, Dynamics of a Ring Polymer in a Gel, Physical Review Letters 73, 1919 (1994). * Ge _et al._ [2016] T. Ge, S. Panyukov, and M. Rubinstein, Self-Similar Conformations and Dynamics in Entangled Melts and Solutions of Nonconcatenated Ring Polymers, Macromolecules 49, 708 (2016). * McKenna and Plazek [1986] G. McKenna and D. Plazek, Viscosity of blends of linear and cyclic molecules of similar molecular mass, Polymer Communications Guildford 27, 304 (1986). * Lee _et al._ [2000] H. H. C. Lee, H. H. C. Lee, W. Lee, T. Chang, and J. Roovers, Fractionation of Cyclic Polystyrene from Linear Precursor by HPLC at the Chromatographic Critical Condition, Macromolecules 33, 8119 (2000), arXiv:1011.1669v3. * Yan _et al._ [2016] Z. C. Yan, S. Costanzo, Y. Jeong, T. Chang, and D. Vlassopoulos, Linear and Nonlinear Shear Rheology of a Marginally Entangled Ring Polymer, Macromolecules 49, 1444 (2016). * Huang _et al._ [2019] Q. Huang, J. Ahn, D. Parisi, T. Chang, O. Hassager, S. Panyukov, M. Rubinstein, and D. Vlassopoulos, Unexpected Stretching of Entangled Ring Macromolecules, Physical Review Letters 122, 208001 (2019). * Borger _et al._ [2020] A. Borger, W. Wang, T. C. O’Connor, T. Ge, G. S. Grest, G. V. Jensen, J. Ahn, T. Chang, O. Hassager, K. Mortensen, D. Vlassopoulos, and Q. 
Huang, Threading–Unthreading Transition of Linear-Ring Polymer Blends in Extensional Flow, ACS Macro Letters 9, 1452 (2020). * Doi _et al._ [2015] Y. Doi, K. Matsubara, Y. Ohta, T. Nakano, D. Kawaguchi, Y. Takahashi, A. Takano, and Y. Matsushita, Melt rheology of ring polystyrenes with ultrahigh purity, Macromolecules 48, 3140 (2015). * Molnar _et al._ [2021] K. Molnar, C. A. Helfer, G. Kaszas, E. Krisch, D. Chen, G. B. McKenna, J. A. Kornfield, and J. E. Puskas, Liquid chromatography at critical conditions (LCCC): Capabilities and limitations for polymer analysis, Journal of Molecular Liquids 322 (2021). * Chapman _et al._ [2012] C. D. Chapman, S. Shanbhag, D. E. Smith, and R. M. Robertson-Anderson, Complex effects of molecular topology on diffusion in entangled biopolymer blends, Soft Matter 8, 9177 (2012). * Iyer _et al._ [2007] B. V. S. Iyer, A. K. Lele, and S. Shanbhag, What Is the Size of a Ring Polymer in a Ring-Linear Blend?, Macromolecules 40, 5995 (2007). * Halverson _et al._ [2012] J. D. Halverson, G. S. Grest, A. Y. Grosberg, and K. Kremer, Rheology of ring polymer melts: From linear contaminants to ring-linear blends, Physical Review Letters 108, 038301 (2012), arXiv:1112.3519. * Gartner _et al._ [2019] T. E. Gartner, F. M. Haque, A. M. Gomi, S. M. Grayson, M. J. A. Hore, and A. Jayaraman, Scaling Exponent and Effective Interactions in Linear and Cyclic Polymer Solutions: Theory, Simulations, and Experiments, Macromolecules 52, 4579 (2019). * Robertson and Smith [2007a] R. M. Robertson and D. E. Smith, Strong effects of molecular topology on diffusion of entangled DNA molecules, Proceedings of the National Academy of Sciences of the United States of America 104, 4824 (2007a). * Robertson and Smith [2007b] R. M. Robertson and D. E. Smith, Self-diffusion of entangled linear and circular DNA molecules: Dependence on length and concentration, Macromolecules 40, 3373 (2007b). * Subramanian and Shanbhag [2008] G. Subramanian and S. 
Shanbhag, Self-diffusion in binary blends of cyclic and linear polymers, Macromolecules 41, 7239 (2008). * Habuchi _et al._ [2010] S. Habuchi, N. Satoh, T. Yamamoto, Y. Tezuka, and M. Vacha, Multimode diffusion of ring polymer molecules revealed by a single-molecule study, Angewandte Chemie - International Edition 49, 1418 (2010). * Graessley [1982] W. Graessley, Entangled linear, branched and network polymer systems - Molecular theories, in _Synthesis and Degradation Rheology and Extrusion_ (Springer-Verlag Berlin Heidelberg, 1982) pp. 67–117. * Klein [1986] J. Klein, Dynamics of Entangled Linear, Branched, and Cyclic Polymers, Macromolecules 19, 105 (1986). * Mills _et al._ [1987] P. J. Mills, J. W. Mayer, E. J. Kramer, G. Hadziioannou, P. Lutz, C. Strazielle, P. Rempp, and A. J. Kovacs, Diffusion of polymer rings in linear polymer matrices, Macromolecules 20, 513 (1987). * Yang _et al._ [2010] Y.-B. Yang, Z.-Y. Sun, C.-L. Fu, L.-J. An, and Z.-G. Wang, Monte Carlo simulation of a single ring among linear chains: structural and dynamic heterogeneity, Journal of Chemical Physics 133, 064901 (2010). * Tsalikis _et al._ [2016] D. G. Tsalikis, V. G. Mavrantzas, and D. Vlassopoulos, Analysis of Slow Modes in Ring Polymers: Threading of Rings Controls Long-Time Relaxation, ACS Macro Letters 5, 755 (2016). * Kruteva _et al._ [2017] M. Kruteva, J. Allgaier, and D. Richter, Direct observation of two distinct diffusive modes for polymer rings in linear polymer matrices by pulsed field gradient (PFG) NMR, Macromolecules 50, 9482 (2017). * O’Connor _et al._ [2020] T. C. O’Connor, T. Ge, M. Rubinstein, and G. S. Grest, Topological linking drives anomalous thickening of ring polymers in weak extensional flows, Physical Review Letters 124, 027801 (2020). * Schroeder [2018] C. M. Schroeder, Single Polymer Dynamics for Molecular Rheology, Journal of Rheology 62, 371 (2018). * Perkins _et al._ [1997] T. T. Perkins, D. E. Smith, and S. 
Chu, Single polymer dynamics in an elongational flow, Science 276, 2016 (1997). * Smith _et al._ [1999] D. E. Smith, H. P. Babcock, and S. Chu, Single-polymer dynamics in steady shear flow, Science 283, 1724 (1999). * Soh _et al._ [2018] B. W. Soh, V. Narsimhan, A. R. Klotz, and P. S. Doyle, Knots modify the coil-stretch transition in linear DNA polymers, Soft Matter 14, 1689 (2018). * Zhou and Schroeder [2016a] Y. Zhou and C. M. Schroeder, Single polymer dynamics under large amplitude oscillatory extension, Physical Review Fluids 1, 053301 (2016a). * Zhou and Schroeder [2016b] Y. Zhou and C. M. Schroeder, Transient and Average Unsteady Dynamics of Single Polymers in Large-Amplitude Oscillatory Extension, Macromolecules 49, 8018 (2016b). * Hsiao _et al._ [2017] K.-W. Hsiao, C. Sasmal, J. R. Prakash, and C. M. Schroeder, Direct observation of DNA dynamics in semi-dilute solutions in extensional flow, Journal of Rheology 61, 151 (2017), arXiv:1604.06754. * Sasmal _et al._ [2017] C. Sasmal, K.-W. Hsiao, C. M. Schroeder, and J. R. Prakash, Parameter-Free Prediction of DNA dynamics in Planar Extensional Flow of Semidilute Solutions, Journal of Rheology 61, 169 (2017). * Young and Sing [2019a] C. D. Young and C. E. Sing, Simulation of semidilute polymer solutions in planar extensional flow via conformationally averaged Brownian noise, Journal of Chemical Physics 151, 124907 (2019a). * Zhou and Schroeder [2018] Y. Zhou and C. M. Schroeder, Dynamically Heterogeneous Relaxation of Entangled Polymer Chains, Physical Review Letters 120, 267801 (2018). * Li _et al._ [2015] Y. Li, K.-W. Hsiao, C. A. Brockman, D. Y. Yates, R. M. Robertson-Anderson, J. A. Kornfield, M. J. San Francisco, C. M. Schroeder, and G. B. McKenna, When ends meet: Circular DNA stretches differently in elongational flows, Macromolecules 48, 5997 (2015). * Hsiao _et al._ [2016] K.-W. Hsiao, C. M. Schroeder, and C. E. 
Sing, Ring Polymer Dynamics Are Governed by a Coupling between Architecture and Hydrodynamic Interactions, Macromolecules 49, 1961 (2016). * Weiss _et al._ [2017] L. B. Weiss, A. Nikoubashman, and C. N. Likos, Topology-Sensitive Microfluidic Filter for Polymers of Varying Stiffness, ACS Macro Letters 6, 1426 (2017). * Young _et al._ [2019] C. D. Young, J. R. Qian, M. Marvin, and C. E. Sing, Ring polymer dynamics and tumbling-stretch transitions in planar mixed flows, Physical Review E 99, 062502 (2019). * Tu _et al._ [2020] M. Q. Tu, M. Lee, R. M. Robertson-Anderson, and C. M. Schroeder, Direct Observation of Ring Polymer Dynamics in the Flow-Gradient Plane of Shear Flow, Macromolecules 53, 9406 (2020). * Zhou _et al._ [2019] Y. Zhou, K.-W. Hsiao, K. E. Regan, D. Kong, G. B. McKenna, R. M. Robertson-Anderson, and C. M. Schroeder, Effect of molecular architecture on ring polymer dynamics in semidilute linear polymer solutions, Nature Communications 10, 1753 (2019). * Young _et al._ [2020] C. D. Young, Y. Zhou, C. M. Schroeder, and C. E. Sing, Dynamics and rheology of semidilute solutions of ring-linear polymer blends in planar extensional flow (2020), arXiv:2011.01386 [cond-mat.soft]. * Laib _et al._ [2006] S. Laib, R. M. Robertson, and D. E. Smith, Preparation and characterization of a set of linear DNA molecules for polymer physics and rheology studies, Macromolecules 39, 4115 (2006). * Robertson _et al._ [2006] R. M. Robertson, S. Laib, and D. E. Smith, Diffusion of isolated DNA molecules: dependence on length and topology, Proceedings of the National Academy of Sciences of the United States of America 103, 7310 (2006). * Peddireddy _et al._ [2020a] K. R. Peddireddy, M. Lee, Y. Zhou, S. Adalbert, S. Anderson, C. M. Schroeder, and R. M. Robertson-Anderson, Unexpected entanglement dynamics in semidilute blends of supercoiled and ring DNA, Soft Matter 16, 152 (2020a). * Kremer and Grest [1990] K. Kremer and G. S. 
Grest, Dynamics of entangled linear polymer melts: A molecular-dynamics simulation, The Journal of Chemical Physics 92, 5057 (1990). * Rotne and Prager [1969] J. Rotne and S. Prager, Variational treatment of hydrodynamic interaction in polymers, The Journal of Chemical Physics 50, 4831 (1969). * Yamakawa [1970] H. Yamakawa, Transport properties of polymer chains in dilute solution: hydrodynamic interaction, The Journal of Chemical Physics 53, 436 (1970). * Geyer and Winter [2009] T. Geyer and U. Winter, An O($N^{2}$) approximation for hydrodynamic interactions in Brownian dynamics simulations, The Journal of Chemical Physics 130, 114905 (2009). * Miao _et al._ [2017] L. Miao, C. D. Young, and C. E. Sing, An iterative method for hydrodynamic interactions in Brownian dynamics simulations of polymer dynamics, The Journal of Chemical Physics 147, 024904 (2017). * Young _et al._ [2018] C. D. Young, M. Marvin, and C. E. Sing, Conformationally averaged iterative Brownian dynamics simulations of semidilute polymer solutions, The Journal of Chemical Physics 149, 174904 (2018). * Young and Sing [2019b] C. D. Young and C. E. Sing, Simulation of semidilute polymer solutions in planar extensional flow via conformationally averaged Brownian noise, The Journal of Chemical Physics 151, 124907 (2019b). * Katsarou _et al._ [2020] A. F. Katsarou, A. J. Tsalikis, D. G. Tsalikis, and V. G. Mavrantzas, Dynamic Heterogeneity in Polymer Blends, Polymers 12, 752 (2020). * Shenoy _et al._ [2016] A. Shenoy, C. V. Rao, and C. M. Schroeder, Stokes trap for multiplexed particle manipulation and assembly using fluidics, Proceedings of the National Academy of Sciences of the United States of America 113, 3976 (2016). * Hernández Cifre and García De La Torre [2001] J. G. Hernández Cifre and J. García De La Torre, Kinetic aspects of the coil-stretch transition of polymer chains in dilute solution under extensional flow, Journal of Chemical Physics 115, 9578 (2001). * Peddireddy _et al._ [2020b] K. R. 
Peddireddy, M. Lee, C. M. Schroeder, and R. M. Robertson-Anderson, Viscoelastic properties of ring-linear DNA blends exhibit non-monotonic dependence on blend composition, Physical Review Research 2, 023213 (2020b).
arXiv:2101.01227
# Ability of unbounded pairs of observers to achieve quantum advantage in random access codes with a single pair of qubits Debarshi Das [email protected] S. N. Bose National Centre for Basic Sciences, Block JD, Sector III, Salt Lake, Kolkata 700 106, India Arkaprabha Ghosal [email protected] Centre for Astroparticle Physics and Space Science (CAPSS), Bose Institute, Block EN, Sector V, Salt Lake, Kolkata 700 091, India Ananda G. Maity [email protected] S. N. Bose National Centre for Basic Sciences, Block JD, Sector III, Salt Lake, Kolkata 700 106, India Som Kanjilal [email protected] Harish-Chandra Research Institute, Chhatnag Road, Jhunsi, Prayagraj (Allahabad) 211 019, India Arup Roy [email protected] Department of Physics, A B N Seal College, Cooch Behar, West Bengal 736 101, India ###### Abstract Complications in preparing and preserving quantum correlations stimulate recycling of a single quantum resource in information processing and communication tasks multiple times. Here, we consider a scenario involving multiple independent pairs of observers acting with unbiased inputs on a single pair of spatially separated qubits sequentially. In this scenario, we address whether more than one pair of observers can demonstrate quantum advantage in some specific $2\rightarrow 1$ and $3\rightarrow 1$ random access codes. Interestingly, we not only address these in the affirmative, but also illustrate that unbounded pairs can exhibit quantum advantage. Furthermore, these results remain valid even when all observers perform suitable projective measurements and an appropriate separable state is initially shared. ## I Introduction Random access code (RAC) [1, 2, 3] is one of the fundamental communication protocols which, when assisted with quantum resources, manifests the astonishing potential of quantum systems in the context of information processing. 
In an $n\rightarrow m$ RAC, $n$ is the number of bits ($x_{0}$, $x_{1}$, $\cdots$, $x_{n-1}$) accessed by the sender, say, Alice, and $m$ is the number of bits that Alice is allowed to send to the receiver, say, Bob, with $m<n$. In each run, Bob chooses a number $y$ randomly (where $y\in\\{0,1,\cdots,n-1\\}$) and tries to guess the bit $x_{y}$ accessed by Alice but unknown to Bob. The efficacy of a RAC is limited when only classical strategies are employed. However, one can surpass the best classical strategies using quantum resources, e.g., by using either quantum communication [3] or classical bit communication assisted with a shared bipartite quantum state [4, 5]. RAC assisted with quantum resources was initially introduced [1, 2, 3] in order to demonstrate the immense capabilities of quantum systems in information processing tasks. The state of an $m$-qubit system can be represented by a unit vector in a $2^{m}$-dimensional complex Hilbert space, which opens up the possibility of encoding and transmitting classical information with exponentially fewer qubits, for example, Alice encoding $n$ bits into an $m$-qubit system (where $n\gg m$) and sending it to Bob. However, due to the Holevo bound [6], $m$ qubits cannot transmit more than $m$ classical bits of information faithfully. Hence, exponentially many degrees of freedom of a quantum system remain inaccessible. Nevertheless, the situation becomes interesting when Bob does not need to know all $n$ bits of information together and instead chooses which bit of classical information he would like to extract from the encoding. In order to extract different bits of information, Bob performs different measurements, and these measurements are in general non-commuting. Thus, by choosing a particular measurement, Bob inevitably disturbs the state and destroys some or all of the information that would have been revealed by other possible measurements. 
This leads to the idea of RAC assisted with quantum resources. RAC has served as a powerful quantum communication task with various applications ranging from quantum finite automata [2, 3, 7], communication complexity [8, 9, 10, 11, 12], non-local games [13], network coding [14, 15], locally decodable codes [16, 17, 18], dimension witnessing [19, 20, 21, 22], quantum state learning [23], self-testing [24, 25, 26, 27], quantum randomness certification [28], quantum key distribution [29], and studies of no-signaling resources [30], to characterising quantum mechanics from information-theoretic principles [31]. Experimental demonstrations of RAC protocols have also been reported [32, 33]. In the present study, we consider RAC using classical communication assisted with shared quantum correlations. In practice, it is experimentally difficult to create quantum correlations, and environmental interactions unavoidably degrade the efficacy of any quantum correlation. To cope with these issues, one can recycle a single copy of a quantum resource multiple times. Doing so also indicates how much quantumness in a correlation is preserved after a few cycles of local operations. Historically, this issue was first addressed by Silva et al. [34], where two spatially separated spin-$\frac{1}{2}$ particles were assumed to be shared between a single Alice and multiple independent Bobs. In this scenario, the maximum number of Bobs that can demonstrate Bell nonlocality [40] was deduced [34, 35, 36, 37, 38, 39]. This idea of sharing quantum correlations by multiple sequential observers has been extended in different contexts as well [41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56]. The applications of sequential sharing of quantum correlations in different information processing tasks have also been demonstrated [27, 57, 58, 59, 60, 61, 62, 63]. 
In all these studies, multiple observers performing sequential measurements on only one qubit have been considered, whereas the present study contemplates multiple observers performing sequential measurements on each of the two qubits. This is a more general and practical scenario for re-utilizing quantum correlations in commercial quantum technologies. In particular, we focus on recycling a single quantum resource in sequentially carrying out RAC tasks multiple times. Here, we consider the scenario where a two-qubit state is shared between two spatially separated wings. Multiple independent Alices (say, Alice1, Alice2, Alice3, $\cdots$) and multiple independent Bobs (say, Bob1, Bob2, Bob3, $\cdots$) act sequentially on the first and second qubit respectively with unbiased inputs. At first, Alice1-Bob1 executes the RAC task with the initially shared two-qubit state. Afterwards, Alice1 passes her qubit to Alice2 and Bob1 passes his qubit to Bob2. Next, Alice2-Bob2 also passes the two qubits to Alice3-Bob3 after performing the RAC task and so on. In the above scenario, we show that unbounded pairs of Alice-Bob (i.e., Alice1-Bob1, Alice2-Bob2, $\cdots$) can gain quantum advantage in executing RAC tasks. Specifically, we demonstrate that the above result holds 1) when all pairs always perform some particular $2\rightarrow 1$ RAC, 2) when all pairs always perform some particular $3\rightarrow 1$ RAC task, 3) when each of the pairs always performs either a $2\rightarrow 1$ RAC or a $3\rightarrow 1$ RAC independent of other pairs, 4) when each pair performs a $2\rightarrow 1$ RAC and a $3\rightarrow 1$ RAC with different probabilities independent of other pairs. While comparing the classical and quantum strategies to demonstrate quantum advantage, we restrict the amount of shared classical bits to be equal to the amount of shared quantum bits. This constraint is quite natural in the sense that classical bits, similar to qubits, are expensive resources [5, 64, 65, 66]. 
Since the aforementioned scenario involves two qubits, quantum strategies are compared with classical ones assisted with two bits from a common source. The rest of the paper is arranged as follows. In Section II, we review the $2\rightarrow 1$ and $3\rightarrow 1$ RAC protocols assisted with classical communication and a two-qubit state. The scenario considered here and the main results are presented in Section III. Finally, we conclude with a short discussion in Section IV.

## II $2\rightarrow 1$ and $3\rightarrow 1$ RAC protocols assisted with classical communication and a two-qubit state

Let us now describe the $n\rightarrow 1$ (with $n\in\\{2,3\\}$) RAC protocol using limited classical communication and a shared two-qubit state. At first, Alice is given a string of $n$ bits $x=(x_{0},x_{1},\cdots,x_{n-1})$ chosen randomly from a uniform distribution with $x_{i}\in\\{0,1\\}$ for all $i\in\\{0,1,\cdots,n-1\\}$. Next, depending on the input bit string, Alice performs one of $2^{n}$ dichotomic measurements denoted by $A_{x_{0}x_{1}\cdots x_{n-1}}$ on her qubit. The outcome of the measurement $A_{x_{0}x_{1}\cdots x_{n-1}}$ is denoted by $a_{x_{0}x_{1}\cdots x_{n-1}}\in\\{0,1\\}$. Alice then communicates the outcome of her measurement to Bob with one bit of information. Next, Bob tries to guess one of the $n$ bits $x_{y}$ (with $y\in\\{0,1,\cdots,n-1\\}$) given to Alice (in each run, $y$ is chosen randomly). For this purpose, Bob performs one of $n$ dichotomic measurements denoted by $B_{y}$ on his qubit. The outcome of the measurement $B_{y}$ is denoted by $b_{y}\in\\{0,1\\}$. Finally, Bob's guess is given by $a_{x_{0}x_{1}\cdots x_{n-1}}\oplus b_{y}$. Hence, the RAC task is successful, i.e., Bob's guess is correct, if and only if $a_{x_{0}x_{1}\cdots x_{n-1}}\oplus b_{y}=x_{y}$. 
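The XOR decoding rule can be checked directly from Born-rule probabilities $P(a,b)=\mathrm{Tr}[(E_{a}\otimes F_{b})\,\rho]$. Below is a minimal numerical sketch for the $2\rightarrow 1$ task with a shared singlet and measurement directions of the form used later in the paper; the helper names are ours, and sharp measurements ($\lambda=\eta=1$) are assumed:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
PAULI = [np.array([[0, 1], [1, 0]], dtype=complex),     # sigma_x
         np.array([[0, -1j], [1j, 0]], dtype=complex),  # sigma_y
         np.array([[1, 0], [0, -1]], dtype=complex)]    # sigma_z

def effect(n_hat, outcome, sharpness=1.0):
    """POVM element (I + sharpness * (-1)^outcome * n.sigma) / 2."""
    n_sigma = sum(n * s for n, s in zip(n_hat, PAULI))
    return 0.5 * (I2 + sharpness * (-1) ** outcome * n_sigma)

# Shared singlet |psi-> = (|01> - |10>)/sqrt(2), for which t_11 = t_22 = t_33 = -1
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

def success_prob(x0, x1, y, lam=1.0, eta=1.0):
    """P(a XOR b_y = x_y) for the 2->1 RAC with the singlet."""
    u = np.array([-((-1) ** x0), -((-1) ** x1), 0.0]) / np.sqrt(2)  # Alice's axis
    v = np.array([1.0, 0.0, 0.0]) if y == 0 else np.array([0.0, 1.0, 0.0])
    x_y = (x0, x1)[y]
    return sum(np.trace(np.kron(effect(u, a, lam), effect(v, b, eta)) @ rho).real
               for a in (0, 1) for b in (0, 1) if a ^ b == x_y)

# Worst case over all inputs; sharp measurements give 1/2 + 1/(2*sqrt(2)) ~ 0.854
p_min = min(success_prob(x0, x1, y)
            for x0 in (0, 1) for x1 in (0, 1) for y in (0, 1))
```

Reducing the sharpness parameters `lam` and `eta` weakens the measurements and lowers the advantage toward the classical bound of $\frac{1}{2}$, which is what makes room for subsequent observer pairs.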
In the present study, we quantify the efficacy of the RAC protocol by the minimum success probability, defined as $P_{\texttt{Min}}^{n\rightarrow 1}=\min\limits_{x_{0},x_{1},\cdots,x_{n-1},y}\,\,P(a_{x_{0}x_{1}\cdots x_{n-1}}\oplus b_{y}=x_{y})$. (1)

## III Results

We consider a scenario involving multiple independent Alices and multiple independent Bobs, as described in Fig. 1. Alice1-Bob1 initially share one pair of qubits in the singlet state, $|\psi^{-}\rangle=\frac{1}{\sqrt{2}}(|01\rangle-|10\rangle)$. This first pair performs the aforementioned RAC task, and then Alice1 and Bob1 pass their particles to Alice2 and Bob2, respectively. Alice2 and Bob2 likewise pass their particles to Alice3 and Bob3, respectively, after executing the RAC. In this way, the process continues. Note that each of the observers acts with unbiased inputs. Here we want to find out how many pairs of Alice and Bob can exhibit quantum advantage. If any pair performs projective measurements, it will disturb the state maximally, and the next pair may not get any quantum advantage. Hence, in order to continue the above sequential RAC task with multiple pairs of Alice-Bob, we consider weak measurements by all pairs. The weak measurement formalism should be chosen such that the disturbance due to the measurement is minimized for any given amount of information gain [34]. One such example is the unsharp measurement (a particular class of Positive Operator-Valued Measure, or POVM) [67] with the generalized von Neumann-Lüders state-transformation rule [35, 42].

Figure 1: Scenario for performing the $n\rightarrow 1$ RAC task with multiple pairs of observers sequentially.

In the present paper, we consider two particular RAC tasks. The first one is the $2\rightarrow 1$ RAC task assisted with two (quantum or classical) bits, shared from a common source and having a maximally mixed marginal at the receiver's end. 
In the classical strategy, a source produces two correlated bits which are shared by Alice and Bob. The two binary values $0$ and $1$ of Bob’s bit are equiprobable. Consequently, Alice’s encoding and Bob’s decoding strategies are now assisted with these bits. The minimum success probability of such a classical RAC task is always less than or equal to $\frac{1}{2}$ [5]. In the quantum strategy, only two-qubit states with a maximally mixed marginal at Bob’s end can be shared in this task, and $P^{2\rightarrow 1}_{\texttt{Min}}>\frac{1}{2}$ implies quantum advantage. Another RAC task that we consider is the $3\rightarrow 1$ RAC task assisted with two (quantum or classical) bits shared from a common source. There is no restriction on the marginals of the shared bits in this case. For classical strategies, the minimum success probability is always less than or equal to $\frac{1}{2}$ [5]. Hence, $P^{3\rightarrow 1}_{\texttt{Min}}>\frac{1}{2}$ ensures quantum advantage. Suppose the pair Alicek-Bobk for arbitrary $k\in\\{1,2,\cdots\\}$ performs the above RAC using the shared Bell-diagonal state, $\rho^{k}_{AB}=\frac{1}{4}\left(\mathbf{I}_{4}+\sum_{i=1}^{3}t^{k}_{ii}\,\sigma_{i}\otimes\sigma_{i}\right),$ (2) where $(t^{k}_{uu})^{2}\geq(t^{k}_{vv})^{2}\geq(t^{k}_{ww})^{2}$ for an arbitrary choice of $u\neq v\neq w\in\\{1,2,3\\}$; $\sigma_{i}$ with $i=1,2,3$ are the three Pauli matrices. Next, let us present the encoding-decoding strategies adopted by the pair Alicek-Bobk.
In the case of the $2\rightarrow 1$ RAC, Alicek performs one of the four POVMs denoted by $A^{k}_{x_{0}x_{1}}\equiv\\{E^{k,0}_{x_{0}x_{1}},E^{k,1}_{x_{0}x_{1}}\\}$ with $(x_{0},x_{1})\in\\{(00),(01),(10),(11)\\}$, where $E^{k,\,a^{k}_{x_{0}x_{1}}}_{x_{0}x_{1}}=\frac{1}{2}\left[\mathbf{I}_{2}+\lambda^{k}\,(-1)^{a^{k}_{x_{0}x_{1}}}\,\,\left(\hat{u}^{k}_{x_{0}x_{1}}\cdot\vec{\sigma}\right)\right].$ (3) Bobk performs one of the two POVMs denoted by $B^{k}_{y}\equiv\\{E^{k,0}_{y},E^{k,1}_{y}\\}$ with $y\in\\{0,1\\}$, where $E^{k,\,b^{k}_{y}}_{y}=\frac{1}{2}\left[\mathbf{I}_{2}+\eta^{k}\,(-1)^{b^{k}_{y}}\,\left(\hat{v}^{k}_{y}\cdot\vec{\sigma}\right)\right].$ (4) Here $\lambda^{k}$, $\eta^{k}$ $\in$ $(0,1]$ are the sharpness parameters; $\vec{\sigma}=(\sigma_{1},\sigma_{2},\sigma_{3})$; $a^{k}_{x_{0}x_{1}},\,b^{k}_{y}\in\\{0,1\\}$ denote the outcomes of the POVMs $A^{k}_{x_{0}x_{1}}$ performed by Alicek and $B^{k}_{y}$ performed by Bobk, respectively. The unit vectors $\hat{u}^{k}_{x_{0}x_{1}}$ and $\hat{v}^{k}_{y}$ are given by, $\displaystyle\hat{u}^{k}_{x_{0}x_{1}}=\left(\dfrac{(-1)^{x_{0}}t^{k}_{11}}{\sqrt{(t^{k}_{11})^{2}+(t^{k}_{22})^{2}}},\dfrac{(-1)^{x_{1}}t^{k}_{22}}{\sqrt{(t^{k}_{11})^{2}+(t^{k}_{22})^{2}}},0\right),$ (5) $\displaystyle\hat{v}^{k}_{0}=\left(1,0,0\right),\quad\quad\quad\quad\quad\quad\quad\quad\quad\hat{v}^{k}_{1}=\left(0,1,0\right).$ (6) On the other hand, for executing the aforementioned $3\rightarrow 1$ RAC, Alicek performs one of the eight possible POVMs denoted by $A^{k}_{x_{0}x_{1}x_{2}}\equiv\\{E^{k,0}_{x_{0}x_{1}x_{2}},E^{k,1}_{x_{0}x_{1}x_{2}}\\}$ with $x_{i}\in\\{0,1\\}$ for all $i\in\\{0,1,2\\}$, where $E^{k,\,a^{k}_{x_{0}x_{1}x_{2}}}_{x_{0}x_{1}x_{2}}=\frac{1}{2}\left[\mathbf{I}_{2}+\lambda^{k}\,(-1)^{a^{k}_{x_{0}x_{1}x_{2}}}\,\,\left(\hat{u}^{k}_{x_{0}x_{1}x_{2}}\cdot\vec{\sigma}\right)\right].$ (7) Bobk performs one of the three POVMs denoted by $B^{k}_{y}\equiv\\{E^{k,0}_{y},E^{k,1}_{y}\\}$ with $y\in\\{0,1,2\\}$, where
$E^{k,\,b^{k}_{y}}_{y}=\frac{1}{2}\left[\mathbf{I}_{2}+\eta^{k}\,(-1)^{b^{k}_{y}}\,\left(\hat{v}^{k}_{y}\cdot\vec{\sigma}\right)\right].$ (8) We choose the unit vectors $\hat{u}^{k}_{x_{0}x_{1}x_{2}}=\dfrac{\vec{u}^{k}_{x_{0}x_{1}x_{2}}}{|\vec{u}^{k}_{x_{0}x_{1}x_{2}}|}$ and $\hat{v}^{k}_{y}$ as follows, $\displaystyle\vec{u}^{k}_{x_{0}x_{1}x_{2}}=\Big{(}(-1)^{x_{0}}t^{k}_{11},\,(-1)^{x_{1}}t^{k}_{22},\,(-1)^{x_{2}}t^{k}_{33}\Big{)},$ (9) $\displaystyle\hat{v}^{k}_{0}=\left(1,0,0\right),\quad\hat{v}^{k}_{1}=\left(0,1,0\right),\quad\hat{v}^{k}_{2}=\left(0,0,1\right).$ (10) With these, we can present the following lemma (for proof, see Appendix A), which will be useful for proving the main result: ###### Lemma 1. Let Alicek-Bobk perform the $n\rightarrow 1$ RAC task (where $n=2$ or $n=3$) with a two-qubit Bell-diagonal state (2) using the above unsharp measurements. Then the pair achieves a minimum success probability strictly greater than $\frac{1}{2}$ if $\min\limits_{i\leq n}\left[(t^{k}_{ii})^{2}\right]\neq 0$. Next, we want to find out the post-measurement state $\rho^{k+1}_{AB}$ received, on average, by Alicek+1-Bobk+1 from Alicek-Bobk. When Alicek-Bobk performs the $2\rightarrow 1$ RAC, following the generalized von Neumann-Lüders transformation rule, we have (see Appendix B) $\displaystyle\rho^{k+1}_{AB}$ $\displaystyle=\frac{1}{8}\sum_{x_{0},x_{1},y=0}^{1}\Bigg{[}\sum_{a^{k}_{x_{0}x_{1}},b^{k}_{y}=0}^{1}\Bigg{(}\sqrt{E^{k,\,a^{k}_{x_{0}x_{1}}}_{x_{0}x_{1}}}\otimes\sqrt{E^{k,\,b^{k}_{y}}_{y}}\Bigg{)}$ $\displaystyle\quad\quad\quad\quad\quad\quad\quad\quad\rho_{AB}^{k}\Bigg{(}\sqrt{E^{k,\,a^{k}_{x_{0}x_{1}}}_{x_{0}x_{1}}}\otimes\sqrt{E^{k,\,b^{k}_{y}}_{y}}\Bigg{)}^{\dagger}\Bigg{]}$ $\displaystyle=\frac{1}{4}\left(\mathbf{I}_{4}+\sum_{i=1}^{3}t^{k+1}_{ii}\,\sigma_{i}\otimes\sigma_{i}\right).$ (11) The average is taken since we have assumed that multiple Alices or multiple Bobs act independently of each other.
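Lemma 1 can also be checked numerically. The following sketch (assuming NumPy; the function and variable names are ours, not part of the paper) builds the Bell-diagonal state (2) and the unsharp effects (3)-(6), evaluates every guessing probability of the $2\rightarrow 1$ RAC via the Born rule, and, for the singlet with projective measurements, reproduces the optimal value $\frac{1}{2}\left(1+\frac{1}{\sqrt{2}}\right)$:

```python
import numpy as np

# Pauli matrices
paulis = [np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]], dtype=complex),
          np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2)

def bell_diagonal(t):
    """Two-qubit Bell-diagonal state of Eq. (2), t = (t11, t22, t33)."""
    rho = np.eye(4, dtype=complex)
    for i in range(3):
        rho += t[i] * np.kron(paulis[i], paulis[i])
    return rho / 4

def effect(n_hat, outcome, sharpness):
    """Unsharp effect (1/2)[I + s (-1)^outcome (n.sigma)], Eqs. (3)-(4)."""
    n_sigma = sum(n_hat[i] * paulis[i] for i in range(3))
    return 0.5 * (I2 + sharpness * (-1) ** outcome * n_sigma)

def p_min_2to1(t, lam, eta):
    """Minimum success probability of the 2->1 RAC via the Born rule."""
    rho = bell_diagonal(t)
    norm = np.sqrt(t[0] ** 2 + t[1] ** 2)
    v = [np.array([1.0, 0, 0]), np.array([0, 1.0, 0])]            # Eq. (6)
    probs = []
    for x0 in (0, 1):
        for x1 in (0, 1):
            u = np.array([(-1) ** x0 * t[0], (-1) ** x1 * t[1], 0]) / norm  # Eq. (5)
            for y, xy in ((0, x0), (1, x1)):
                # P(a + b = x_y mod 2) = sum_z Tr[rho (E_{a=z} x E_{b=x_y+z})]
                p = sum(np.trace(rho @ np.kron(effect(u, z, lam),
                                               effect(v[y], (xy + z) % 2, eta))).real
                        for z in (0, 1))
                probs.append(p)
    return min(probs)

# Singlet (t = (-1, -1, -1)) with projective measurements (lam = eta = 1)
print(round(p_min_2to1((-1, -1, -1), 1.0, 1.0), 4))   # 0.8536 = (1 + 1/sqrt(2))/2
```

Evaluating the same function with $\lambda^{k}\eta^{k}<1$ shows the advantage shrinking but persisting as long as $t^{k}_{11}t^{k}_{22}\neq 0$, as Lemma 1 asserts.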
Here, we have also used the assumption that Alicek and Bobk perform measurements with unbiased inputs. Similarly, when Alicek-Bobk performs the $3\rightarrow 1$ RAC, it is observed that the average post-measurement state $\rho^{k+1}_{AB}$ received by Alicek+1-Bobk+1 has the Bell-diagonal form (11) (see Appendix C for details). Moreover, when Alicek-Bobk performs the $n\rightarrow 1$ RAC task (where $n=2$ or $n=3$) with the state (2), it can be shown that $\min\limits_{i\leq n}\left[(t^{k+1}_{ii})^{2}\right]\neq 0$ if $\min\limits_{i\leq n}\left[(t^{k}_{ii})^{2}\right]\neq 0$ (for details, see Appendix B and Appendix C). Now, consider that the same $n\rightarrow 1$ RAC (i.e., either the $2\rightarrow 1$ or the $3\rightarrow 1$ RAC) is performed by each of the pairs. In such a scenario, combining the above results, we can present the following: if Alice1-Bob1 initially shares the singlet state, then this pair achieves $P^{n\rightarrow 1}_{\texttt{Min}}>\frac{1}{2}$ (with $n=2$ or $n=3$) using the aforementioned unsharp measurements. Moreover, the average post-measurement state $\rho^{2}_{AB}$ received by Alice2-Bob2 is the Bell-diagonal state (2) with $k=2$ and $\min\limits_{i\leq n}\left[(t^{2}_{ii})^{2}\right]\neq 0$. Hence, Alice2-Bob2 also achieves $P^{n\rightarrow 1}_{\texttt{Min}}>\frac{1}{2}$. Subsequently, Alice3-Bob3 receives the Bell-diagonal state (2) with $k=3$ and $\min\limits_{i\leq n}\left[(t^{3}_{ii})^{2}\right]\neq 0$ and exhibits $P^{n\rightarrow 1}_{\texttt{Min}}>\frac{1}{2}$ as well. This process continues for arbitrarily many pairs. Therefore, we can present the following theorem: ###### Theorem 1. Unbounded pairs of Alice and Bob can demonstrate quantum advantage either in the $2\rightarrow 1$ RAC task assisted with two bits shared from a common source and having maximally mixed marginal at the receiver’s end, or in the $3\rightarrow 1$ RAC task assisted with two correlated bits.
Importantly, the statements of Theorem 1 hold for all values of $\lambda^{k}$ $\in$ $(0,1]$ and $\eta^{k}$ $\in$ $(0,1]$ for all possible $k$ $\in$ $\\{1,2,\cdots\\}$. Moreover, for the aforementioned $n\rightarrow 1$ RAC with $n=2$ or $n=3$, starting with any Bell-diagonal two-qubit (entangled or separable) state given by Eq.(2) with $k=1$ and $\min\limits_{i\leq n}\left[(t^{1}_{ii})^{2}\right]\neq 0$, one gets the same result as stated in Theorem 1. Hence, the following corollary can be stated: ###### Corollary 1. Unbounded pairs of Alice and Bob can exhibit quantum advantage in some particular $n\rightarrow 1$ RAC task (with $n=2$ or $n=3$) even when each of the observers performs suitable projective measurements and the initially shared two-qubit state belongs to a particular subset of separable states. When Alice1-Bob1 initially shares the singlet state and performs the aforementioned $n\rightarrow 1$ RAC (where $n=2$ or $n=3$) using the measurements described earlier with $\lambda^{1}=\eta^{1}=1$ (i.e., projective measurements), then this pair achieves $P^{n\rightarrow 1}_{\texttt{Min}}=\frac{1}{2}\left(1+\frac{1}{\sqrt{n}}\right)$. This is the maximum permissible value of $P^{n\rightarrow 1}_{\texttt{Min}}$ with quantum resources [4]. In this case also, the residual quantum correlation in the average post-measurement state is sufficient for demonstrating quantum advantage in the $n\rightarrow 1$ RAC by unbounded pairs of Alice and Bob. Hence, a single pair of qubits can be utilized indefinitely to gain quantum advantage in some particular RAC even when the optimal quantum advantage is exhibited in the first round. Remark: We observe that when an arbitrary pair gains a large amount of quantum advantage, then only a few subsequent pairs will get ‘significant’ quantum advantage. On the other hand, when a pair gets a small amount of quantum advantage, a larger number of subsequent pairs can achieve ‘significant’ quantum advantage.
Here, ‘significant’ quantum advantage implies that $\left(P_{\texttt{Min}}^{n\rightarrow 1}-\frac{1}{2}\right)$ is positive and large enough to be detected in a real experiment. Hence, there may exist a trade-off relation between the amount of quantum advantage gained by an arbitrary pair and the number of subsequent pairs exhibiting a considerable amount of quantum advantage. Moreover, either of these two quantities can be increased at the expense of the other by suitably choosing the sharpness parameters of the measurements (see Appendix D). In a practical scenario, a large but finite number of sequential pairs of observers may be required to perform some communication tasks with only one pair of qubits. The number of sequential pairs required to exhibit quantum advantage depends on the particular context under consideration and can be realized by fine-tuning the unsharpness of the measurements. Next, we consider a more general scenario where an arbitrary pair Alicek-Bobk performs the aforementioned $2\rightarrow 1$ RAC task with probability $p_{k}$ and the aforementioned $3\rightarrow 1$ RAC with probability $(1-p_{k})$, where $0\leq p_{k}\leq 1$. For example, Alicek and Bobk can fix the task to be performed in each experimental run prior to the initiation of the sequential RAC and, during its execution, perform the two different tasks accordingly. This type of scenario is particularly relevant when a sequence of RAC tasks is implemented as an intermediate step in commercial quantum computation. In such cases, different tasks may be required to be performed by the same pair of particles in different steps depending on the users’ choices.
In this scenario, if a singlet state or any Bell-diagonal two-qubit (entangled or separable) state given by Eq.(2) with $k=1$ and $\min\left[(t^{1}_{11})^{2},(t^{1}_{22})^{2},(t^{1}_{33})^{2}\right]\neq 0$ is initially shared, then the following result is attained (see Appendix E for details): ###### Corollary 2. Unbounded pairs of Alice and Bob can demonstrate quantum advantage when an arbitrary pair Alicek-Bobk performs a $2\rightarrow 1$ RAC (assisted with two correlated bits with maximally mixed marginal at the receiver’s end) with probability $p_{k}$ and a $3\rightarrow 1$ RAC (assisted with two bits shared from a common source) with probability $1-p_{k}$, independently of other pairs. When Alicek-Bobk performs projective measurements and $p_{k}=1$ (i.e., performs the $2\rightarrow 1$ RAC with certainty), then the condition $\min\left[(t^{x}_{11})^{2},(t^{x}_{22})^{2},(t^{x}_{33})^{2}\right]\neq 0$ will not be satisfied for the average post-measurement state received by all subsequent pairs (i.e., for all $x\in\\{k+1,k+2,\cdots\\}$). Hence, none of these pairs will achieve quantum advantage in the $3\rightarrow 1$ RAC. Consequently, only under unsharp measurements (with sharpness parameters strictly less than $1$) can we state the following corollary (see Appendix E for details): ###### Corollary 3. Unbounded pairs of Alice and Bob can demonstrate quantum advantage when an arbitrary pair Alicek-Bobk performs a $2\rightarrow 1$ RAC with certainty and another arbitrary pair Alice${}^{\tilde{k}}$-Bob${}^{\tilde{k}}$ performs a $3\rightarrow 1$ RAC with certainty for all choices of $k\neq\tilde{k}\in\\{1,2,\cdots\\}$. ## IV Conclusions Here we have considered a scenario involving multiple independent pairs of Alice and Bob sharing a single pair of qubits and performing some particular $2\rightarrow 1$ and $3\rightarrow 1$ RAC tasks with unbiased inputs sequentially.
In this scenario, we have shown that unbounded pairs can gain quantum advantage even when all observers perform projective measurements. These results address the issue of recycling a single copy of a quantum resource in performing information processing tasks multiple times sequentially. This is of utmost importance since, in reality, preparing quantum correlations and preserving them against inevitable environmental interactions are difficult. Our results point out that quantum correlations present in separable states [68] can be preserved indefinitely in spite of being utilized in each step. Furthermore, weak measurements are not necessary for this purpose; suitable projective measurements suffice. Note that this is not the case for entanglement or Bell-nonlocality. Hence, these results signify one fundamental difference between the quantum correlations present in entanglement and those present in separable states: the first is destroyed after just one cycle of projective measurements, while the second is retained even after infinitely many cycles. The advantage of quantum information processing tasks assisted with separable states [5, 66] is thus pointed out by our present study. In fact, our results open up the possibility of implementing an unbounded sequence of any task, for which quantum advantage can be demonstrated even using separable states (say, for example, remote state preparation [69]), with only one pair of qubits. There exists a complementarity between the question addressed here and the one-way communication complexity problem [70, 71]. In the one-way communication complexity problem, Alice and Bob are given inputs $x\in\left\\{0,1\right\\}^{n}$ and $y\in\left\\{0,1\right\\}^{m}$, respectively. The goal for Bob is to calculate a binary function $f(x,y)$. Alice is allowed to send a limited amount of classical communication to Bob. This game can be thought of as a number of parallel RACs taking place simultaneously.
The main goal of any communication complexity problem is to minimize the amount of classical communication. However, there is no restriction on the shared entanglement. On the contrary, the present study aims to reduce the amount of shared correlation, but does not focus on reducing the number of communicated bits. Recently, measurement protocols have been proposed to demonstrate arbitrarily many Bell-CHSH inequality [40] violations with various independent Bobs and a single Alice using unbiased inputs when a pure entangled two-qubit state is initially shared [39]. The result, however, requires arbitrarily high precision engineering for the measurement apparatus and, hence, is too strenuous to implement in reality. On the other hand, the unsharp measurements chosen in the present study can be realized in photonic systems based on the techniques adopted in [62, 63]. Moreover, our results are valid for any range of sharpness parameters and do not require any entanglement. Hence, for the experimental implementation of a long sequence of quantum-correlation detections with a single two-qubit state, our results are less demanding. To the best of our knowledge, this study points out for the first time that there exist some communication tasks in which unbounded pairs of observers can exhibit quantum advantage even if a single quantum resource is used. Finding out different communication tasks with the above feature merits further investigation. Next, it is worthwhile to fully characterize the set of two-qubit states for which Theorem 1 holds. It is also interesting to find out whether there exists any two-qubit state for which weak measurements are necessary for satisfying Theorem 1.
AG acknowledges Bose Institute, Kolkata for financial support. SK thanks the Department of Science and Technology (DST), Government of India for the financial assistance through the QuEST project. ## References * [1] S. Wiesner, ACM SIGACT News 15, 78 (1983). * [2] A. Ambainis, A. Nayak, A. Ta-Shma, and U. Vazirani, in Proceedings of the 31st Annual ACM Symposium on Theory of Computing (STOC’99) (ACM, New York, 1999), pp. 376–383. * [3] A. Ambainis, A. Nayak, A. Ta-Shma, and U. Vazirani, J. ACM 49, 496 (2002). * [4] M. Pawlowski, and M. Zukowski, Phys. Rev. A 81, 042326 (2010). * [5] T. K. C. Bobby, and T. Paterek, New J. Phys. 16, 093063 (2014). * [6] A. Holevo, Problems Inform. Transmission 9, 177 (1973). * [7] A. Nayak, in 40th Annual Symposium on Foundations of Computer Science (Cat. No. 99CB37039) (IEEE, New York, 1999), pp. 369–376. * [8] H. Klauck, in Proceedings of the IEEE Symposium on Foundations of Computer Science (IEEE Computer Society, NW Washington, DC, 2001), p. 288; in Proceedings of the Thirty-Second Annual ACM Symposium on Theory of Computing (Association for Computing Machinery, New York, 2000), pp. 644–651. * [9] S. Aaronson, Theory of Computing 1, 1 (2005). * [10] D. Gavinsky, J. Kempe, O. Regev, and R. de Wolf, in Proceedings of the 38th Annual ACM Symposium on Theory of Computing (STOC’06) (Association for Computing Machinery, New York, 2006), pp. 594–603. * [11] H. Buhrman, and R. de Wolf, in Proceedings 16th Annual IEEE Conference on Computational Complexity (IEEE Computer Society, Los Alamitos, California, 2001), pp. 120–130. * [12] D. Martínez, A. Tavakoli, M. Casanova, G. Cañas, B. Marques, and G. Lima, Phys. Rev. Lett. 121, 150504 (2018). * [13] S. Muhammad, A. Tavakoli, M. Kurant, M. Pawłowski, M. Żukowski, and M. Bourennane, Phys. Rev. X 4, 021047 (2014). * [14] M. Hayashi, K. Iwama, H. Nishimura, R. Raymond, and S. Yamashita, New J. Phys. 8, 129 (2006). * [15] M. Hayashi, K. Iwama, H. Nishimura, R. Raymond, and S. Yamashita, Proc.
24th Int. Symp. on Theoretical Aspects of Computer Science (STACS 2007), Lecture Notes in Computer Science 4393, pp. 610-621 (2007). * [16] I. Kerenidis, and R. de Wolf, J. Comput. Syst. Sci. 69, 395–420 (2004). * [17] S. Wehner, and R. de Wolf, in Automata, Languages and Programming. ICALP 2005, Lecture Notes in Computer Science, Vol. 3580 (Springer, Berlin, Heidelberg, 2005), pp. 1424–1436. * [18] A. Ben-Aroya, O. Regev, and R. de Wolf, in Proceedings of the 49th Annual IEEE Symposium on Foundations of Computer Science (FOCS’08) (IEEE Computer Society, NW Washington, DC, United States, 2008), pp. 477–486. * [19] S. Wehner, M. Christandl, and A. C. Doherty, Phys. Rev. A 78, 062112 (2008). * [20] J. Ahrens, P. Badziąg, M. Pawłowski, M. Żukowski, and M. Bourennane, Phys. Rev. Lett. 112, 140401 (2014). * [21] A. Tavakoli, A. Hameedi, B. Marques, and M. Bourennane, Phys. Rev. Lett. 114, 170502 (2015). * [22] M. Czechlewski, D. Saha, A. Tavakoli, and M. Pawłowski, Phys. Rev. A 98, 062305 (2018). * [23] S. Aaronson, Proc. R. Soc. London Ser. A 463, 3089 (2007). * [24] A. Tavakoli, J. Kaniewski, T. Vertesi, D. Rosset, and N. Brunner, Phys. Rev. A 98, 062307 (2018). * [25] M. Farkas, and J. Kaniewski, Phys. Rev. A 99, 032316 (2019). * [26] A. Tavakoli, M. Smania, T. Vertesi, N. Brunner, and M. Bourennane, Science Advances 6, 16 (2020). * [27] K. Mohan, A. Tavakoli, and N. Brunner, New J. Phys. 21, 083034 (2019). * [28] H.-W. Li, Z.-Q. Yin, Y.-C. Wu, X.-B. Zou, S. Wang, W. Chen, G.-C. Guo, and Z.-F. Han, Phys. Rev. A 84, 034301 (2011). * [29] M. Pawlowski, and N. Brunner, Phys. Rev. A 84, 010302(R) (2011). * [30] A. Grudka, K. Horodecki, M. Horodecki, W. Kłobus, and M. Pawłowski, Phys. Rev. Lett. 113, 100401 (2014); A. Grudka, M. Horodecki, R. Horodecki, and A. Wójcik, Phys. Rev. A 92, 052312 (2015). * [31] M. Pawlowski, T. Paterek, D. Kaszlikowski, V. Scarani, A. Winter, and M. Zukowski, Nature 461, 1101 (2009). * [32] R. W. Spekkens, D. H. Buzacott, A. J. Keehn, B.
Toner, and G. J. Pryde, Phys. Rev. Lett. 102, 010401 (2009). * [33] X.-R. Wang, L.-Y. Wu, C.-X. Liu, T.-J. Liu, J. Li, and Q. Wang, Phys. Rev. A 99, 052313 (2019). * [34] R. Silva, N. Gisin, Y. Guryanova, and S. Popescu, Phys. Rev. Lett. 114, 250401 (2015). * [35] S. Mal, A. S. Majumdar, and D. Home, Mathematics 4, 48 (2016). * [36] M.-J. Hu, Z.-Y. Zhou, X.-M. Hu, C.-F. Li, G.-C. Guo, and Y.-S. Zhang, npj Quantum Information 4, 63 (2018). * [37] M. Schiavon, L. Calderaro, M. Pittaluga, G. Vallone, and P. Villoresi, Quantum Sci. Technol. 2, 015010 (2017). * [38] D. Das, A. Ghosal, S. Sasmal, S. Mal, and A. S. Majumdar, Phys. Rev. A 99, 022305 (2019). * [39] P. J. Brown, and R. Colbeck, Phys. Rev. Lett. 125, 090401 (2020). * [40] J. F. Clauser, M. A. Horne, A. Shimony, and R. A. Holt, Phys. Rev. Lett. 23, 880 (1969). * [41] S. Roy, A. Kumari, S. Mal, and A. Sen De, arXiv:2012.12200 [quant-ph]. * [42] S. Sasmal, D. Das, S. Mal, and A. S. Majumdar, Phys. Rev. A 98, 012305 (2018). * [43] A. Shenoy H., S. Designolle, F. Hirsch, R. Silva, N. Gisin, and N. Brunner, Phys. Rev. A 99, 022317 (2019). * [44] Y.-H. Choi, S. Hong, T. Pramanik, H.-T. Lim, Y.-S. Kim, H. Jung, S.-W. Han, S. Moon, and Y.-W. Cho, Optica 7, 675 (2020). * [45] A. Bera, S. Mal, A. Sen De, and U. Sen, Phys. Rev. A 98, 062304 (2018). * [46] G. Foletto, L. Calderaro, A. Tavakoli, M. Schiavon, F. Picciariello, A. Cabello, P. Villoresi, and G. Vallone, Phys. Rev. Applied 13, 044008 (2020). * [47] C. Srivastava, S. Mal, A. Sen De, and U. Sen, Phys. Rev. A 103, 032408 (2021). * [48] S. Datta, and A. S. Majumdar, Phys. Rev. A 98, 042311 (2018). * [49] C. Ren, T. Feng, D. Yao, H. Shi, J. Chen, and X. Zhou, Phys. Rev. A 100, 052121 (2019). * [50] T. Feng, C. Ren, Y. Tian, M. Luo, H. Shi, J. Chen, and X. Zhou, Phys. Rev. A 102, 032220 (2020). * [51] H. Anwer, N. Wilson, R. Silva, S. Muhammad, A. Tavakoli, and M. Bourennane, Quantum 5, 551 (2021). * [52] A. Kumari, and A. K. Pan, Phys. Rev. A 100, 062130 (2019).
* [53] R. D. Baldijao, and M. Terra Cunha, Phys. Rev. A 102, 052226 (2020). * [54] S. Saha, D. Das, S. Sasmal, D. Sarkar, K. Mukherjee, A. Roy, and S. S. Bhattacharya, Quantum Inf. Process. 18, 42 (2019). * [55] A. G. Maity, D. Das, A. Ghosal, A. Roy, and A. S. Majumdar, Phys. Rev. A 101, 042340 (2020). * [56] S. Gupta, A. G. Maity, D. Das, A. Roy, and A. S. Majumdar, Phys. Rev. A 103, 022421 (2021). * [57] F. J. Curchod, M. Johansson, R. Augusiak, M. J. Hoban, P. Wittek, and A. Acin, Phys. Rev. A 95, 020102(R) (2017). * [58] A. Tavakoli, and A. Cabello, Phys. Rev. A 97, 032131 (2018). * [59] H.-W. Li, Y.-S. Zhang, X.-B. An, Z.-F. Han, and G.-C. Guo, Commun. Phys. 1, 10 (2018). * [60] X.-B. An, H.-W. Li, Z.-Q. Yin, M.-J. Hu, W. Huang, B.-J. Xu, S. Wang, W. Chen, G.-C. Guo, and Z.-F. Han, Opt. Lett. 43, 3437 (2018). * [61] S. Roy, A. Bera, S. Mal, A. Sen De, and U. Sen, Phys. Lett. A 392, 127143 (2021). * [62] H. Anwer, S. Muhammad, W. Cherifi, N. Miklin, A. Tavakoli, and M. Bourennane, Phys. Rev. Lett. 125, 080403 (2020). * [63] G. Foletto, L. Calderaro, G. Vallone, and P. Villoresi, Phys. Rev. Research 2, 033205 (2020). * [64] S. Lloyd, Phys. Rev. Lett. 88, 237901 (2002). * [65] D. Das, B. Bhattacharya, C. Datta, A. Roy, C. Jebaratnam, A. S. Majumdar, and R. Srikanth, Phys. Rev. A 97, 062335 (2018). * [66] C. Jebarathinam, D. Das, S. Kanjilal, R. Srikanth, D. Sarkar, I. Chattopadhyay, and A. S. Majumdar, Phys. Rev. A 100, 012344 (2019). * [67] P. Busch, M. Grabowski, and P. J. Lahti, _Operational Quantum Physics_ (Springer-Verlag, Berlin, 1997). * [68] H. Ollivier, and W. H. Zurek, Phys. Rev. Lett. 88, 017901 (2001). * [69] B. Dakic, Y. O. Lipp, X. Ma, M. Ringbauer, S. Kropatschek, S. Barz, T. Paterek, V. Vedral, A. Zeilinger, C. Brukner, and P. Walther, Nat. Phys. 8, 666 (2012). * [70] G. Brassard, Foundations of Physics 33, 1593-1616 (2003). * [71] R. Cleve, and H. Buhrman, Phys. Rev. A 56, 1201 (1997).
## Appendix A Proof of Lemma 1 For the $2\rightarrow 1$ RAC: Let Alicek-Bobk share the following Bell-diagonal two-qubit state, $\rho^{k}_{AB}=\frac{1}{4}(\mathbf{I}_{4}+\sum_{i=1}^{3}t^{k}_{ii}\,\sigma_{i}\otimes\sigma_{i}),$ (12) with $(t^{k}_{uu})^{2}\geq(t^{k}_{vv})^{2}\geq(t^{k}_{ww})^{2}$ for an arbitrary choice of $u\neq v\neq w\in\\{1,2,3\\}$. Alicek and Bobk perform the $2\rightarrow 1$ RAC task using the unsharp measurements mentioned in Eqs.(3,4,5,6). Next, let us compute the expression for a typical guessing probability $P(a^{k}_{x_{0}x_{1}}\oplus b^{k}_{y}=x_{y})$. Using Born’s rule, one can write $\displaystyle P(a^{k}_{x_{0}x_{1}}\oplus b^{k}_{y}=x_{y})$ $\displaystyle=\sum_{z=0}^{1}P\left(a^{k}_{x_{0}x_{1}}=z,b^{k}_{y}=|x_{y}-z|\,\,\Big{|}\,\,A^{k}_{x_{0}x_{1}},B^{k}_{y}\right),$ (13) where $P\left(a^{k}_{x_{0}x_{1}}=z,b^{k}_{y}=|x_{y}-z|\,\,\Big{|}\,\,A^{k}_{x_{0}x_{1}},B^{k}_{y}\right)$ denotes the joint probability with which Alicek and Bobk get the outcomes $z$ and $|x_{y}-z|$ upon performing the measurements $A^{k}_{x_{0}x_{1}}$ and $B^{k}_{y}$ respectively. From Eq.(13), we have the following, $\displaystyle P(a^{k}_{x_{0}x_{1}}\oplus b^{k}_{y}=x_{y})$ $\displaystyle=\sum_{z=0}^{1}\text{Tr}\left[\rho^{k}_{AB}\left(E^{k,\,z}_{x_{0}x_{1}}\otimes E^{k,\,|x_{y}-z|}_{y}\right)\right]$ $\displaystyle=\frac{1}{2}\left[1+(-1)^{x_{y}}\,\lambda^{k}\eta^{k}\,\left(t^{k}_{(y+1)\,(y+1)}\right)\,\left(\hat{u}_{x_{0}x_{1}}^{k}\cdot\hat{v}_{y}^{k}\right)\right],$ (14) where $\hat{u}^{k}_{x_{0}x_{1}}$ and $\hat{v}^{k}_{y}$ are given in Eqs.(5,6).
Now, a straightforward calculation leads to the following, $P(a^{k}_{00}\oplus b^{k}_{0}=0)=P(a^{k}_{01}\oplus b^{k}_{0}=0)=P(a^{k}_{10}\oplus b^{k}_{0}=1)=P(a^{k}_{11}\oplus b^{k}_{0}=1)=\frac{1}{2}\left[1+\lambda^{k}\eta^{k}\frac{(t^{k}_{11})^{2}}{\sqrt{(t^{k}_{11})^{2}+(t^{k}_{22})^{2}}}\right],$ (15a) $P(a^{k}_{00}\oplus b^{k}_{1}=0)=P(a^{k}_{01}\oplus b^{k}_{1}=1)=P(a^{k}_{10}\oplus b^{k}_{1}=0)=P(a^{k}_{11}\oplus b^{k}_{1}=1)=\frac{1}{2}\left[1+\lambda^{k}\eta^{k}\frac{(t^{k}_{22})^{2}}{\sqrt{(t^{k}_{11})^{2}+(t^{k}_{22})^{2}}}\right].$ (15b) Hence, the minimum success probability is given by, $P_{\texttt{Min}}^{2\rightarrow 1}=\frac{1}{2}\left[1+\lambda^{k}\eta^{k}\frac{\min\left[(t^{k}_{11})^{2},(t^{k}_{22})^{2}\right]}{\sqrt{(t^{k}_{11})^{2}+(t^{k}_{22})^{2}}}\right].$ (16) Thus, for all values of $\lambda^{k}$ $\in$ $(0,1]$ and $\eta^{k}$ $\in$ $(0,1]$, $P_{\texttt{Min}}^{2\rightarrow 1}>\frac{1}{2}$ if $\min\left[(t^{k}_{11})^{2},(t^{k}_{22})^{2}\right]\neq 0$. For the $3\rightarrow 1$ RAC: Let Alicek-Bobk perform the $3\rightarrow 1$ RAC task using the shared Bell-diagonal state given by Eq. (12). Alicek and Bobk perform the unsharp measurements given by Eqs.(7,8,9,10) of the main paper. The expression for a typical guessing probability $P(a^{k}_{x_{0}x_{1}x_{2}}\oplus b^{k}_{y}=x_{y})$ can be calculated using Born’s rule as follows $\displaystyle P(a^{k}_{x_{0}x_{1}x_{2}}\oplus b^{k}_{y}=x_{y})$ $\displaystyle=\sum_{z=0}^{1}\text{Tr}\left[\rho^{k}_{AB}\left(E^{k,\,z}_{x_{0}x_{1}x_{2}}\otimes E^{k,\,|x_{y}-z|}_{y}\right)\right]$ $\displaystyle=\frac{1}{2}\left[1+(-1)^{x_{y}}\,\lambda^{k}\eta^{k}\,\left(t^{k}_{(y+1)\,(y+1)}\right)\,\left(\hat{u}_{x_{0}x_{1}x_{2}}^{k}\cdot\hat{v}_{y}^{k}\right)\right],$ (17) where $\hat{u}^{k}_{x_{0}x_{1}x_{2}}$ and $\hat{v}^{k}_{y}$ are mentioned earlier in Eqs.(9,10) of the main paper.
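Equation (17) can be cross-checked numerically before listing the individual probabilities. The sketch below (assuming NumPy; the helper names are ours, not part of the paper) compares the explicit Born-rule trace with the closed form for all $24$ input combinations of a generic valid Bell-diagonal state:

```python
import numpy as np

paulis = [np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]], dtype=complex),
          np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2)

# A valid Bell-diagonal state and generic sharpness parameters
t, lam, eta = (-0.9, -0.8, -0.7), 0.8, 0.9
rho = np.eye(4, dtype=complex)
for i in range(3):
    rho += t[i] * np.kron(paulis[i], paulis[i])
rho /= 4

ok = True
for x in np.ndindex(2, 2, 2):                       # Alice's input string (x0, x1, x2)
    u = np.array([(-1) ** x[i] * t[i] for i in range(3)])
    u /= np.linalg.norm(u)                          # Eq. (9)
    for y in range(3):                              # Bob's input; v_y = e_y, Eq. (10)
        # Left-hand side: sum_z Tr[rho (E^z x E^{|x_y - z|})]
        born = 0.0
        for z in (0, 1):
            Ea = 0.5 * (I2 + lam * (-1) ** z * sum(u[i] * paulis[i] for i in range(3)))
            Eb = 0.5 * (I2 + eta * (-1) ** ((x[y] + z) % 2) * paulis[y])
            born += np.trace(rho @ np.kron(Ea, Eb)).real
        # Right-hand side of Eq. (17): note u . v_y = u[y] and t_{(y+1)(y+1)} = t[y]
        closed = 0.5 * (1 + (-1) ** x[y] * lam * eta * t[y] * u[y])
        ok = ok and abs(born - closed) < 1e-12
print(ok)   # True: Eq. (17) holds for all 24 input combinations
```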
Hence, we have the following, $\displaystyle P(a^{k}_{000}\oplus b^{k}_{0}=0)=P(a^{k}_{001}\oplus b^{k}_{0}=0)=P(a^{k}_{010}\oplus b^{k}_{0}=0)=P(a^{k}_{011}\oplus b^{k}_{0}=0)=\frac{1}{2}\left[1+\lambda^{k}\eta^{k}\frac{(t^{k}_{11})^{2}}{\sqrt{(t^{k}_{11})^{2}+(t^{k}_{22})^{2}+(t^{k}_{33})^{2}}}\right],$ $\displaystyle P(a^{k}_{100}\oplus b^{k}_{0}=1)=P(a^{k}_{101}\oplus b^{k}_{0}=1)=P(a^{k}_{110}\oplus b^{k}_{0}=1)=P(a^{k}_{111}\oplus b^{k}_{0}=1)=\frac{1}{2}\left[1+\lambda^{k}\eta^{k}\frac{(t^{k}_{11})^{2}}{\sqrt{(t^{k}_{11})^{2}+(t^{k}_{22})^{2}+(t^{k}_{33})^{2}}}\right],$ $\displaystyle P(a^{k}_{000}\oplus b^{k}_{1}=0)=P(a^{k}_{001}\oplus b^{k}_{1}=0)=P(a^{k}_{010}\oplus b^{k}_{1}=1)=P(a^{k}_{011}\oplus b^{k}_{1}=1)=\frac{1}{2}\left[1+\lambda^{k}\eta^{k}\frac{(t^{k}_{22})^{2}}{\sqrt{(t^{k}_{11})^{2}+(t^{k}_{22})^{2}+(t^{k}_{33})^{2}}}\right],$ $\displaystyle P(a^{k}_{100}\oplus b^{k}_{1}=0)=P(a^{k}_{101}\oplus b^{k}_{1}=0)=P(a^{k}_{110}\oplus b^{k}_{1}=1)=P(a^{k}_{111}\oplus b^{k}_{1}=1)=\frac{1}{2}\left[1+\lambda^{k}\eta^{k}\frac{(t^{k}_{22})^{2}}{\sqrt{(t^{k}_{11})^{2}+(t^{k}_{22})^{2}+(t^{k}_{33})^{2}}}\right],$ $\displaystyle P(a^{k}_{000}\oplus b^{k}_{2}=0)=P(a^{k}_{001}\oplus b^{k}_{2}=1)=P(a^{k}_{010}\oplus b^{k}_{2}=0)=P(a^{k}_{011}\oplus b^{k}_{2}=1)=\frac{1}{2}\left[1+\lambda^{k}\eta^{k}\frac{(t^{k}_{33})^{2}}{\sqrt{(t^{k}_{11})^{2}+(t^{k}_{22})^{2}+(t^{k}_{33})^{2}}}\right],$ $\displaystyle P(a^{k}_{100}\oplus b^{k}_{2}=0)=P(a^{k}_{101}\oplus b^{k}_{2}=1)=P(a^{k}_{110}\oplus b^{k}_{2}=0)=P(a^{k}_{111}\oplus b^{k}_{2}=1)=\frac{1}{2}\left[1+\lambda^{k}\eta^{k}\frac{(t^{k}_{33})^{2}}{\sqrt{(t^{k}_{11})^{2}+(t^{k}_{22})^{2}+(t^{k}_{33})^{2}}}\right].$ (18) From the above equations, the minimum success probability is given by, $P_{\texttt{Min}}^{3\rightarrow 1}=\frac{1}{2}\left[1+\lambda^{k}\eta^{k}\frac{\min\left[(t^{k}_{11})^{2},(t^{k}_{22})^{2},(t^{k}_{33})^{2}\right]}{\sqrt{(t^{k}_{11})^{2}+(t^{k}_{22})^{2}+(t^{k}_{33})^{2}}}\right].$ (19) Therefore, 
for all values of $\lambda^{k}$ $\in$ $(0,1]$ and $\eta^{k}$ $\in$ $(0,1]$, $P_{\texttt{Min}}^{3\rightarrow 1}>\frac{1}{2}$ if $\min\left[(t^{k}_{11})^{2},(t^{k}_{22})^{2},(t^{k}_{33})^{2}\right]\neq 0$. ## Appendix B Calculating $\rho^{k+1}_{AB}$ in the case of the $2\rightarrow 1$ RAC Let Alicek-Bobk perform the $2\rightarrow 1$ RAC task with the following Bell-diagonal two-qubit state, $\rho^{k}_{AB}=\frac{1}{4}(\mathbf{I}_{4}+\sum_{i=1}^{3}t^{k}_{ii}\,\sigma_{i}\otimes\sigma_{i}),$ (20) where $(t^{k}_{uu})^{2}\geq(t^{k}_{vv})^{2}\geq(t^{k}_{ww})^{2}$ for an arbitrary choice of $u\neq v\neq w\in\\{1,2,3\\}$. Alicek and Bobk perform the aforementioned unsharp measurements. The average post-measurement state $\rho_{AB}^{k+1}$ received by Alicek+1-Bobk+1 from Alicek-Bobk can be obtained using the generalized von Neumann-Lüders transformation rule as follows, $\displaystyle\rho^{k+1}_{AB}=\frac{1}{8}\sum_{x_{0},x_{1},y=0}^{1}$ $\displaystyle\Bigg{[}\sum_{a^{k}_{x_{0}x_{1}},b^{k}_{y}=0}^{1}\Bigg{(}\sqrt{E^{k,\,a^{k}_{x_{0}x_{1}}}_{x_{0}x_{1}}}\otimes\sqrt{E^{k,\,b^{k}_{y}}_{y}}\Bigg{)}\rho^{k}_{AB}\Bigg{(}\sqrt{E^{k,\,a^{k}_{x_{0}x_{1}}}_{x_{0}x_{1}}}\otimes\sqrt{E^{k,\,b^{k}_{y}}_{y}}\Bigg{)}^{\dagger}\Bigg{]},$ (21) where $\sqrt{E^{k,\,a^{k}_{x_{0}x_{1}}}_{x_{0}x_{1}}}=\frac{\sqrt{1+\lambda^{k}}}{2\sqrt{2}}\left[\mathbf{I}_{2}+(-1)^{a^{k}_{x_{0}x_{1}}}\,\left(\hat{u}^{k}_{x_{0}x_{1}}\cdot\vec{\sigma}\right)\right]+\frac{\sqrt{1-\lambda^{k}}}{2\sqrt{2}}\left[\mathbf{I}_{2}-(-1)^{a^{k}_{x_{0}x_{1}}}\,\left(\hat{u}^{k}_{x_{0}x_{1}}\cdot\vec{\sigma}\right)\right],$ (22) and $\sqrt{E^{k,\,b^{k}_{y}}_{y}}=\frac{\sqrt{1+\eta^{k}}}{2\sqrt{2}}\left[\mathbf{I}_{2}+(-1)^{b^{k}_{y}}\,\left(\hat{v}^{k}_{y}\cdot\vec{\sigma}\right)\right]+\frac{\sqrt{1-\eta^{k}}}{2\sqrt{2}}\left[\mathbf{I}_{2}-(-1)^{b^{k}_{y}}\,\left(\hat{v}^{k}_{y}\cdot\vec{\sigma}\right)\right],$ (23) with $\hat{u}^{k}_{x_{0}x_{1}}$ and $\hat{v}^{k}_{y}$ as given in Eqs.(5,6).
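The closed forms (22)-(23) are simply the spectral square roots of the effects: $(\mathbf{I}_{2}\pm(-1)^{a}\hat{n}\cdot\vec{\sigma})/2$ are the rank-one projectors onto the eigenspaces of $E$, whose eigenvalues are $(1\pm\lambda^{k})/2$. A quick numerical cross-check (a sketch assuming NumPy; the names are ours):

```python
import numpy as np

paulis = [np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]], dtype=complex),
          np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2)

def effect(n, a, lam):
    """Unsharp effect E = (1/2)[I + lam (-1)^a (n.sigma)], as in Eq. (3)."""
    return 0.5 * (I2 + lam * (-1) ** a * sum(n[i] * paulis[i] for i in range(3)))

def sqrt_effect(n, a, lam):
    """Closed form of sqrt(E), Eq. (22): weight the eigenprojectors
    (I +/- (-1)^a n.sigma)/2 by the square roots of the eigenvalues."""
    m = (-1) ** a * sum(n[i] * paulis[i] for i in range(3))
    return (np.sqrt(1 + lam) * (I2 + m)
            + np.sqrt(1 - lam) * (I2 - m)) / (2 * np.sqrt(2))

n = np.array([0.6, 0.0, 0.8])                  # any unit Bloch vector
S = sqrt_effect(n, 0, 0.7)
print(np.allclose(S @ S, effect(n, 0, 0.7)))   # True: (sqrt E)^2 = E
```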
Using Eqs.(21-23), one has the following, $\rho^{k+1}_{AB}=\frac{1}{4}(\mathbf{I}_{4}+\sum_{i=1}^{3}t^{k+1}_{ii}\,\sigma_{i}\otimes\sigma_{i}),$ (24) with $\displaystyle t^{k+1}_{11}$ $\displaystyle=\frac{t^{k}_{11}\,\left[1+\sqrt{1-(\eta^{k})^{2}}\right]\left[(t^{k}_{11})^{2}+(t^{k}_{22})^{2}\sqrt{1-(\lambda^{k})^{2}}\right]}{2\left[(t^{k}_{11})^{2}+(t^{k}_{22})^{2}\right]},$ $\displaystyle t^{k+1}_{22}$ $\displaystyle=\frac{t^{k}_{22}\,\left[1+\sqrt{1-(\eta^{k})^{2}}\right]\left[(t^{k}_{22})^{2}+(t^{k}_{11})^{2}\sqrt{1-(\lambda^{k})^{2}}\right]}{2\left[(t^{k}_{11})^{2}+(t^{k}_{22})^{2}\right]},$ $\displaystyle t^{k+1}_{33}$ $\displaystyle=t^{k}_{33}\,\sqrt{1-(\eta^{k})^{2}}\sqrt{1-(\lambda^{k})^{2}}.$ (25) Hence, the average post-measurement state $\rho^{k+1}_{AB}$ is Bell-diagonal. Further, we have $\min\left[(t^{k+1}_{11})^{2},(t^{k+1}_{22})^{2}\right]\neq 0$ if $\min\left[(t^{k}_{11})^{2},(t^{k}_{22})^{2}\right]\neq 0$ for all possible values of $\lambda^{k}$ $\in$ $(0,1]$ and $\eta^{k}$ $\in$ $(0,1]$. In particular, when $\lambda^{k}=\eta^{k}=1$, we have $\displaystyle t^{k+1}_{11}$ $\displaystyle=\frac{(t^{k}_{11})^{3}}{2\left[(t^{k}_{11})^{2}+(t^{k}_{22})^{2}\right]},$ $\displaystyle t^{k+1}_{22}$ $\displaystyle=\frac{(t^{k}_{22})^{3}}{2\left[(t^{k}_{11})^{2}+(t^{k}_{22})^{2}\right]},$ $\displaystyle t^{k+1}_{33}$ $\displaystyle=0.$ (26) Therefore, the state $\rho^{k+1}_{AB}$ remains Bell-diagonal with $\min\left[(t^{k+1}_{11})^{2},(t^{k+1}_{22})^{2}\right]\neq 0$ if $\min\left[(t^{k}_{11})^{2},(t^{k}_{22})^{2}\right]\neq 0$ even when Alicek-Bobk performs projective measurements. 
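As a numerical sanity check, iterating the map in Eq. (25) together with the $2\rightarrow 1$ analogue of Eq. (19), $P_{\texttt{Min}}^{2\rightarrow 1}=\frac{1}{2}\left[1+\lambda^{k}\eta^{k}\,\min\left[(t^{k}_{11})^{2},(t^{k}_{22})^{2}\right]/\sqrt{(t^{k}_{11})^{2}+(t^{k}_{22})^{2}}\right]$ (written here by analogy with Eq. (19); the $2\rightarrow 1$ bound itself is established in the main text), reproduces the projective-measurement column of Table 1. A minimal sketch:

```python
import math

def pmin_21(t11, t22, lam, eta):
    # Minimum success probability of the 2->1 RAC, the analogue of Eq. (19).
    return 0.5 * (1 + lam * eta * min(t11**2, t22**2) / math.sqrt(t11**2 + t22**2))

def step_21(t11, t22, t33, lam, eta):
    # Correlations of the average post-measurement state, Eq. (25).
    s_lam = math.sqrt(1 - lam**2)
    s_eta = math.sqrt(1 - eta**2)
    n = t11**2 + t22**2
    return (t11 * (1 + s_eta) * (t11**2 + t22**2 * s_lam) / (2 * n),
            t22 * (1 + s_eta) * (t22**2 + t11**2 * s_lam) / (2 * n),
            t33 * s_eta * s_lam)

# Singlet state (t11 = t22 = t33 = -1), every pair measuring sharply.
t = (-1.0, -1.0, -1.0)
probs = []
for _ in range(3):
    probs.append(round(pmin_21(t[0], t[1], 1.0, 1.0), 3))
    t = step_21(*t, 1.0, 1.0)
print(probs)  # [0.854, 0.588, 0.522], matching Table 1
```

The fourth and later pairs already fall below the $0.520$ threshold, consistent with Table 1.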
## Appendix C Calculating $\rho^{k+1}_{AB}$ in the case of the $3\rightarrow 1$ RAC Let Alicek-Bobk perform the $3\rightarrow 1$ RAC task with the following Bell-diagonal two-qubit state, $\rho^{k}_{AB}=\frac{1}{4}(\mathbf{I}_{4}+\sum_{i=1}^{3}t^{k}_{ii}\,\sigma_{i}\otimes\sigma_{i}),$ (27) where $(t^{k}_{uu})^{2}\geq(t^{k}_{vv})^{2}\geq(t^{k}_{ww})^{2}$ for an arbitrary choice of $u\neq v\neq w\in\\{1,2,3\\}$. Alicek and Bobk perform the unsharp measurements mentioned in Eqs.(7,8,9,10). The average post-measurement state $\rho_{AB}^{k+1}$ received by Alicek+1-Bobk+1 from Alicek-Bobk can be obtained using the generalized von Neumann–Lüders transformation rule and it is given by, $\displaystyle\rho^{k+1}_{AB}=\frac{1}{24}\sum_{x_{0},x_{1},x_{2}=0}^{1}\,\,\,\,\,\sum_{y=0}^{1}$ $\displaystyle\Bigg{[}\sum_{a^{k}_{x_{0}x_{1}x_{2}}=0}^{1}\,\,\,\,\,\,\sum_{b^{k}_{y}=0}^{1}\Bigg{(}\sqrt{E^{k,\,a^{k}_{x_{0}x_{1}x_{2}}}_{x_{0}x_{1}x_{2}}}\otimes\sqrt{E^{k,\,b^{k}_{y}}_{y}}\Bigg{)}\rho^{k}_{AB}\Bigg{(}\sqrt{E^{k,\,a^{k}_{x_{0}x_{1}x_{2}}}_{x_{0}x_{1}x_{2}}}\otimes\sqrt{E^{k,\,b^{k}_{y}}_{y}}\Bigg{)}^{\dagger}\Bigg{]},$ (28) where $\sqrt{E^{k,\,a^{k}_{x_{0}x_{1}x_{2}}}_{x_{0}x_{1}x_{2}}}=\frac{\sqrt{1+\lambda^{k}}}{2\sqrt{2}}\left[\mathbf{I}_{2}+(-1)^{a^{k}_{x_{0}x_{1}x_{2}}}\,\left(\hat{u}^{k}_{x_{0}x_{1}x_{2}}\cdot\vec{\sigma}\right)\right]+\frac{\sqrt{1-\lambda^{k}}}{2\sqrt{2}}\left[\mathbf{I}_{2}-(-1)^{a^{k}_{x_{0}x_{1}x_{2}}}\,\left(\hat{u}^{k}_{x_{0}x_{1}x_{2}}\cdot\vec{\sigma}\right)\right],$ (29) and $\sqrt{E^{k,\,b^{k}_{y}}_{y}}=\frac{\sqrt{1+\eta^{k}}}{2\sqrt{2}}\left[\mathbf{I}_{2}+(-1)^{b^{k}_{y}}\,\left(\hat{v}^{k}_{y}\cdot\vec{\sigma}\right)\right]+\frac{\sqrt{1-\eta^{k}}}{2\sqrt{2}}\left[\mathbf{I}_{2}-(-1)^{b^{k}_{y}}\,\left(\hat{v}^{k}_{y}\cdot\vec{\sigma}\right)\right]$ (30) with $\hat{u}^{k}_{x_{0}x_{1}x_{2}}$ and $\hat{v}^{k}_{y}$ as given in Eq.(9) and Eq.(10), respectively. 
Using Eqs.(28-30), we have the following, $\rho^{k+1}_{AB}=\frac{1}{4}(\mathbf{I}_{4}+\sum_{i=1}^{3}t^{k+1}_{ii}\,\sigma_{i}\otimes\sigma_{i}),$ (31) with $\displaystyle t^{k+1}_{11}$ $\displaystyle=\frac{t^{k}_{11}\,\left[1+2\sqrt{1-(\eta^{k})^{2}}\right]\left[(t^{k}_{11})^{2}+\left[(t^{k}_{22})^{2}+(t^{k}_{33})^{2}\right]\sqrt{1-(\lambda^{k})^{2}}\right]}{3\left[(t^{k}_{11})^{2}+(t^{k}_{22})^{2}+(t^{k}_{33})^{2}\right]},$ $\displaystyle t^{k+1}_{22}$ $\displaystyle=\frac{t^{k}_{22}\,\left[1+2\sqrt{1-(\eta^{k})^{2}}\right]\left[(t^{k}_{22})^{2}+\left[(t^{k}_{33})^{2}+(t^{k}_{11})^{2}\right]\sqrt{1-(\lambda^{k})^{2}}\right]}{3\left[(t^{k}_{11})^{2}+(t^{k}_{22})^{2}+(t^{k}_{33})^{2}\right]},$ $\displaystyle t^{k+1}_{33}$ $\displaystyle=\frac{t^{k}_{33}\,\left[1+2\sqrt{1-(\eta^{k})^{2}}\right]\left[(t^{k}_{33})^{2}+\left[(t^{k}_{11})^{2}+(t^{k}_{22})^{2}\right]\sqrt{1-(\lambda^{k})^{2}}\right]}{3\left[(t^{k}_{11})^{2}+(t^{k}_{22})^{2}+(t^{k}_{33})^{2}\right]}.$ (32) Hence, the average post-measurement state $\rho^{k+1}_{AB}$ is Bell-diagonal. Furthermore, we notice that $\min\left[(t^{k+1}_{11})^{2},(t^{k+1}_{22})^{2},(t^{k+1}_{33})^{2}\right]\neq 0$ if $\min\left[(t^{k}_{11})^{2},(t^{k}_{22})^{2},(t^{k}_{33})^{2}\right]\neq 0$ for all possible values of $\lambda^{k}$ $\in$ $(0,1]$ and $\eta^{k}$ $\in$ $(0,1]$. 
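The same check can be run for the $3\rightarrow 1$ task: iterating the map in Eq. (32) and evaluating Eq. (19) at each step reproduces the projective-measurement values of Table 2, with the third pair already falling below the $0.520$ threshold. A minimal sketch:

```python
import math

def pmin_31(t, lam, eta):
    # Eq. (19): minimum success probability of the 3->1 RAC.
    sq = [x**2 for x in t]
    return 0.5 * (1 + lam * eta * min(sq) / math.sqrt(sum(sq)))

def step_31(t, lam, eta):
    # Eq. (32): correlations of the average post-measurement state.
    s_lam = math.sqrt(1 - lam**2)
    s_eta = math.sqrt(1 - eta**2)
    n = sum(x**2 for x in t)
    f = (1 + 2 * s_eta) / (3 * n)
    t11, t22, t33 = t
    return (t11 * f * (t11**2 + (t22**2 + t33**2) * s_lam),
            t22 * f * (t22**2 + (t33**2 + t11**2) * s_lam),
            t33 * f * (t33**2 + (t11**2 + t22**2) * s_lam))

# Singlet state, every pair measuring sharply (lam = eta = 1).
t = (-1.0, -1.0, -1.0)
probs = []
for _ in range(3):
    probs.append(round(pmin_31(t, 1.0, 1.0), 3))
    t = step_31(t, 1.0, 1.0)
print(probs)  # [0.789, 0.532, 0.504]
```

Only the first two pairs exceed $0.520$, in agreement with Table 2.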
When $\lambda^{k}=\eta^{k}=1$, we have $\displaystyle t^{k+1}_{11}$ $\displaystyle=\frac{(t^{k}_{11})^{3}}{3\left[(t^{k}_{11})^{2}+(t^{k}_{22})^{2}+(t^{k}_{33})^{2}\right]},$ $\displaystyle t^{k+1}_{22}$ $\displaystyle=\frac{(t^{k}_{22})^{3}}{3\left[(t^{k}_{11})^{2}+(t^{k}_{22})^{2}+(t^{k}_{33})^{2}\right]},$ $\displaystyle t^{k+1}_{33}$ $\displaystyle=\frac{(t^{k}_{33})^{3}}{3\left[(t^{k}_{11})^{2}+(t^{k}_{22})^{2}+(t^{k}_{33})^{2}\right]}.$ (33) Therefore, the state $\rho^{k+1}_{AB}$ remains Bell-diagonal with $\min\left[(t^{k+1}_{11})^{2},(t^{k+1}_{22})^{2},(t^{k+1}_{33})^{2}\right]\neq 0$ if $\min\left[(t^{k}_{11})^{2},(t^{k}_{22})^{2},(t^{k}_{33})^{2}\right]\neq 0$ even when Alicek-Bobk performs projective measurements.

For $\rho_{AB}^{1}=|\psi^{-}\rangle\langle\psi^{-}|$, all pairs performing projective measurements:

Alicek-Bobk with $k=$ | $\lambda^{k}$ | $\eta^{k}$ | $P_{\texttt{Min}}^{2\rightarrow 1}$
---|---|---|---
$1$ | $1$ | $1$ | $0.854$
$2$ | $1$ | $1$ | $0.588$
$3$ | $1$ | $1$ | $0.522$
$4,5,6,\cdots$ | $1$ | $1$ | $\frac{1}{2}<P_{\texttt{Min}}^{2\rightarrow 1}<0.520$

For $\rho_{AB}^{1}=|\psi^{-}\rangle\langle\psi^{-}|$, only the first pair measuring unsharply:

Alicek-Bobk with $k=$ | $\lambda^{k}$ | $\eta^{k}$ | $P_{\texttt{Min}}^{2\rightarrow 1}$
---|---|---|---
$1$ | $0.340$ | $0.340$ | $0.541$
$2$ | $1$ | $1$ | $0.833$
$3$ | $1$ | $1$ | $0.583$
$4$ | $1$ | $1$ | $0.521$
$5,6,7,\cdots$ | $1$ | $1$ | $\frac{1}{2}<P_{\texttt{Min}}^{2\rightarrow 1}<0.520$

For $\rho_{AB}^{1}=|\psi^{-}\rangle\langle\psi^{-}|$, the first two pairs measuring unsharply:

Alicek-Bobk with $k=$ | $\lambda^{k}$ | $\eta^{k}$ | $P_{\texttt{Min}}^{2\rightarrow 1}$
---|---|---|---
$1$ | $0.340$ | $0.340$ | $0.541$
$2$ | $0.340$ | $0.340$ | $0.538$
$3$ | $1$ | $1$ | $0.813$
$4$ | $1$ | $1$ | $0.578$
$5$ | $1$ | $1$ | $0.520$
$6,7,8,\cdots$ | $1$ | $1$ | $\frac{1}{2}<P_{\texttt{Min}}^{2\rightarrow 1}<0.520$

For $\rho_{AB}^{1}=|\psi^{-}\rangle\langle\psi^{-}|$, all pairs measuring unsharply:

Alicek-Bobk with $k=$ | $\lambda^{k}$ | $\eta^{k}$ | $P_{\texttt{Min}}^{2\rightarrow 1}$
---|---|---|---
$1$ | $0.340$ | $0.340$ | $0.541$
$2$ | $0.340$ | $0.340$ | $0.538$
$3$ | $0.340$ | $0.340$ | $0.536$
$4$ | $0.340$ | $0.340$ | $0.534$
$5$ | $0.340$ | $0.340$ | $0.532$
$6$ | $0.340$ | $0.340$ | $0.530$
$7$ | $0.340$ | $0.340$ | $0.528$
$8$ | $0.340$ | $0.340$ | $0.527$
$9$ | $0.340$ | $0.340$ | $0.525$
$10$ | $0.340$ | $0.340$ | $0.524$
$11$ | $0.340$ | $0.340$ | $0.522$
$12$ | $0.340$ | $0.340$ | $0.521$
$13$ | $0.340$ | $0.340$ | $0.520$
$14,15,16,\cdots$ | $0.340$ | $0.340$ | $\frac{1}{2}<P_{\texttt{Min}}^{2\rightarrow 1}<0.520$

Table 1: Minimum success probabilities in the $2\rightarrow 1$ RAC task (assisted with two bits, shared from a common source and having maximally mixed marginal at the receiver’s end) for different consecutive pairs and different choices of $\lambda^{k}$, $\eta^{k}$ ($k\in\\{1,2,\cdots\\}$). In each of the above tables, the initially shared two-qubit state is the singlet state, $\rho_{AB}^{1}=|\psi^{-}\rangle\langle\psi^{-}|$. All numerical values are rounded to three decimal places. ## Appendix D Trade-off between the amount of quantum advantage gained by an arbitrary pair and the number of subsequent pairs exhibiting a considerable amount of quantum advantage We have shown that an unbounded number of Alice-Bob pairs can, in principle, demonstrate quantum advantage in some particular $n\rightarrow 1$ RAC task (with $n=2$ or $n=3$). Here, quantum advantage implies that the magnitude of $(P_{\texttt{Min}}^{n\rightarrow 1}-\frac{1}{2})$ is positive. However, for experimental implementation, the magnitude of $(P_{\texttt{Min}}^{n\rightarrow 1}-\frac{1}{2})$ should not only be positive, but also be large enough to be detected in a real experiment. In order to explore this issue, which is relevant for practical realization, we consider the quantum advantage ‘significant’ if $P_{\texttt{Min}}^{n\rightarrow 1}\geq 0.520$. Note that the bound $0.520$ is chosen here as an example. 
One can choose any other bound depending on the precision of the experimental apparatus; the characteristics of the results described below will not change. We observe that when an arbitrary pair gains a large amount of quantum advantage by performing projective measurements, or unsharp measurements with sharpness parameters close to unity, then only a few subsequent pairs will obtain ‘significant’ quantum advantage. On the other hand, when a pair gains a small amount of quantum advantage by performing unsharp measurements with sharpness parameters much less than unity, a larger number of subsequent pairs can achieve ‘significant’ quantum advantage. We will describe this aspect through a number of examples. Let us consider that the singlet state is initially shared by the pair Alice1-Bob1 and each of the multiple pairs performs the $2\rightarrow 1$ RAC task assisted with two bits, shared from a common source and having maximally mixed marginal at the receiver’s end. The results in this case are presented in Table 1. From this table, we observe that at most three consecutive pairs can show $P_{\texttt{Min}}^{2\rightarrow 1}\geq 0.520$ when all observers perform projective measurements. Next, consider the case when the first pair performs measurements with $\lambda^{1}=0.340$ and $\eta^{1}=0.340$ and all other pairs perform projective measurements. In this case, the minimum success probability for the first pair is reduced compared to the previous case (with $\lambda^{1}=1$ and $\eta^{1}=1$), but the maximum number of pairs exhibiting $P_{\texttt{Min}}^{2\rightarrow 1}\geq 0.520$ is increased to four. Now, consider that the first and second pairs perform unsharp measurements with $\lambda^{1}=0.340$, $\eta^{1}=0.340$, $\lambda^{2}=0.340$, $\eta^{2}=0.340$ and all other pairs perform projective measurements. 
In this case, the minimum success probability for the second pair is reduced compared to the case with $\lambda^{2}=1$ and $\eta^{2}=1$, and the maximum number of pairs exhibiting $P_{\texttt{Min}}^{2\rightarrow 1}\geq 0.520$ is further increased to five. Proceeding in this way, we find that when all the pairs perform measurements with $\lambda^{k}=0.340$ and $\eta^{k}=0.340$ ($k\in\\{1,2,\cdots\\}$), the maximum number of pairs exhibiting $P_{\texttt{Min}}^{2\rightarrow 1}\geq 0.520$ is increased to thirteen. Next, let us focus on the $3\rightarrow 1$ RAC task assisted with two bits shared from a common source. Here too, consider that the singlet state is initially shared by the pair Alice1-Bob1. The results in this case are presented in Table 2. From this table, it can be noticed that at most two consecutive pairs can achieve $P_{\texttt{Min}}^{3\rightarrow 1}\geq 0.520$ when all pairs perform projective measurements. Next, consider that the first pair performs unsharp measurements with $\lambda^{1}=0.370$ and $\eta^{1}=0.370$ and each of the other pairs performs projective measurements. In this case, we observe that the minimum success probability for the first pair is reduced compared to the previous case with $\lambda^{1}=1$ and $\eta^{1}=1$, but the maximum number of pairs exhibiting $P_{\texttt{Min}}^{3\rightarrow 1}\geq 0.520$ is increased to three. Next, we consider that the first and second pairs perform unsharp measurements with $\lambda^{1}=0.370$, $\eta^{1}=0.370$ and $\lambda^{2}=0.370$, $\eta^{2}=0.370$ respectively. All other pairs perform projective measurements. In this case, the minimum success probability for the second pair is decreased compared to the case with $\lambda^{2}=1$ and $\eta^{2}=1$. On the other hand, the maximum number of pairs exhibiting $P_{\texttt{Min}}^{3\rightarrow 1}\geq 0.520$ is further increased to four. 
Following this approach, we find that when all the pairs perform unsharp measurements with $\lambda^{k}=0.370$ and $\eta^{k}=0.370$ for all $k\in\\{1,2,\cdots\\}$, the maximum number of pairs exhibiting $P_{\texttt{Min}}^{3\rightarrow 1}\geq 0.520$ becomes eight. It can be easily checked that the above aspects remain unaltered when the sequential execution of the aforementioned $n\rightarrow 1$ RAC task (with $n=2$ or $n=3$) is initiated with an appropriate Bell-diagonal two-qubit (entangled or separable) state given by Eq.(2) with $k=1$ and $\min\limits_{i\leq n}\left[(t^{1}_{ii})^{2}\right]\neq 0$ and for different choices of the sharpness parameters. Therefore, we can conclude that there may exist a trade-off relation between the amount of quantum advantage gained by an arbitrary pair and the number of subsequent pairs exhibiting $P_{\texttt{Min}}^{n\rightarrow 1}\geq 0.520$. Importantly, either of these two quantities can be increased at the expense of the other by suitably choosing the sharpness parameters associated with the measurements. 
For $\rho_{AB}^{1}=|\psi^{-}\rangle\langle\psi^{-}|$, all pairs performing projective measurements:

Alicek-Bobk with $k=$ | $\lambda^{k}$ | $\eta^{k}$ | $P_{\texttt{Min}}^{3\rightarrow 1}$
---|---|---|---
$1$ | $1$ | $1$ | $0.789$
$2$ | $1$ | $1$ | $0.532$
$3,4,5,\cdots$ | $1$ | $1$ | $\frac{1}{2}<P_{\texttt{Min}}^{3\rightarrow 1}<0.520$

For $\rho_{AB}^{1}=|\psi^{-}\rangle\langle\psi^{-}|$, only the first pair measuring unsharply:

Alicek-Bobk with $k=$ | $\lambda^{k}$ | $\eta^{k}$ | $P_{\texttt{Min}}^{3\rightarrow 1}$
---|---|---|---
$1$ | $0.370$ | $0.370$ | $0.540$
$2$ | $1$ | $1$ | $0.762$
$3$ | $1$ | $1$ | $0.529$
$4,5,6,\cdots$ | $1$ | $1$ | $\frac{1}{2}<P_{\texttt{Min}}^{3\rightarrow 1}<0.520$

For $\rho_{AB}^{1}=|\psi^{-}\rangle\langle\psi^{-}|$, the first two pairs measuring unsharply:

Alicek-Bobk with $k=$ | $\lambda^{k}$ | $\eta^{k}$ | $P_{\texttt{Min}}^{3\rightarrow 1}$
---|---|---|---
$1$ | $0.370$ | $0.370$ | $0.540$
$2$ | $0.370$ | $0.370$ | $0.536$
$3$ | $1$ | $1$ | $0.738$
$4$ | $1$ | $1$ | $0.526$
$5,6,7,\cdots$ | $1$ | $1$ | $\frac{1}{2}<P_{\texttt{Min}}^{3\rightarrow 1}<0.520$

For $\rho_{AB}^{1}=|\psi^{-}\rangle\langle\psi^{-}|$, all pairs measuring unsharply:

Alicek-Bobk with $k=$ | $\lambda^{k}$ | $\eta^{k}$ | $P_{\texttt{Min}}^{3\rightarrow 1}$
---|---|---|---
$1$ | $0.370$ | $0.370$ | $0.540$
$2$ | $0.370$ | $0.370$ | $0.536$
$3$ | $0.370$ | $0.370$ | $0.532$
$4$ | $0.370$ | $0.370$ | $0.530$
$5$ | $0.370$ | $0.370$ | $0.527$
$6$ | $0.370$ | $0.370$ | $0.524$
$7$ | $0.370$ | $0.370$ | $0.522$
$8$ | $0.370$ | $0.370$ | $0.520$
$9,10,11,\cdots$ | $0.370$ | $0.370$ | $\frac{1}{2}<P_{\texttt{Min}}^{3\rightarrow 1}<0.520$

Table 2: Minimum success probabilities in the $3\rightarrow 1$ RAC task (assisted with two bits shared from a common source) for different consecutive pairs and different choices of $\lambda^{k}$, $\eta^{k}$ ($k\in\\{1,2,\cdots\\}$). In each of the above tables, the initially shared two-qubit state is the singlet state, $\rho_{AB}^{1}=|\psi^{-}\rangle\langle\psi^{-}|$. All numerical values are rounded to three decimal places. 
## Appendix E Proof of Corollary 2 and Corollary 3 Suppose Alicek-Bobk performs the aforementioned $2\rightarrow 1$ RAC task with probability $p_{k}$ and the aforementioned $3\rightarrow 1$ RAC with probability $1-p_{k}$ (with $0\leq p_{k}\leq 1$) using the following Bell-diagonal two-qubit state, $\rho^{k}_{AB}=\frac{1}{4}(\mathbf{I}_{4}+\sum_{i=1}^{3}t^{k}_{ii}\,\sigma_{i}\otimes\sigma_{i}),$ (34) where $(t^{k}_{uu})^{2}\geq(t^{k}_{vv})^{2}\geq(t^{k}_{ww})^{2}>0$ for an arbitrary choice of $u\neq v\neq w\in\\{1,2,3\\}$. This implies that $\min\left[(t^{k}_{11})^{2},(t^{k}_{22})^{2}\right]\neq 0$ and $\min\left[(t^{k}_{11})^{2},(t^{k}_{22})^{2},(t^{k}_{33})^{2}\right]\neq 0$. Hence, while performing the $2\rightarrow 1$ RAC, this state will give $P_{\texttt{Min}}^{2\rightarrow 1}>\frac{1}{2}$. On the other hand, while performing the $3\rightarrow 1$ RAC, this state will give $P_{\texttt{Min}}^{3\rightarrow 1}>\frac{1}{2}$. Hence, overall, Alicek-Bobk will achieve quantum advantage with this state. Next, the average post-measurement state $\rho_{AB}^{k+1}$ received by Alicek+1-Bobk+1 from Alicek-Bobk is given by, $\rho^{k+1}_{AB}=p_{k}\,\rho^{k+1}_{AB_{2\rightarrow 1}}+(1-p_{k})\,\rho^{k+1}_{AB_{3\rightarrow 1}},$ (35) where the average is taken since each pair acts independently of the other pairs. In the above equation, $\rho^{k+1}_{AB_{2\rightarrow 1}}$ is the average post-measurement state received by Alicek+1-Bobk+1 when Alicek-Bobk performs the aforementioned $2\rightarrow 1$ RAC with certainty. One can evaluate $\rho^{k+1}_{AB_{2\rightarrow 1}}$ using Eq.(21). On the other hand, $\rho^{k+1}_{AB_{3\rightarrow 1}}$ is the average post-measurement state received by Alicek+1-Bobk+1 when Alicek-Bobk performs the aforementioned $3\rightarrow 1$ RAC with certainty. It can be evaluated using Eq.(28). 
Hence, we can infer that the state $\rho_{AB}^{k+1}$ has the following Bell-diagonal form, $\rho^{k+1}_{AB}=\frac{1}{4}(\mathbf{I}_{4}+\sum_{i=1}^{3}t^{k+1}_{ii}\,\sigma_{i}\otimes\sigma_{i}),$ (36) where $t^{k+1}_{ii}$ for $i=1,2,3$ can be calculated using Eqs.(25), (32), (35). Hence, we have $\displaystyle t^{k+1}_{11}=$ $\displaystyle p_{k}\frac{t^{k}_{11}\,\left[1+\sqrt{1-(\eta^{k})^{2}}\right]\left[(t^{k}_{11})^{2}+(t^{k}_{22})^{2}\sqrt{1-(\lambda^{k})^{2}}\right]}{2\left[(t^{k}_{11})^{2}+(t^{k}_{22})^{2}\right]}$ $\displaystyle\quad+(1-p_{k})\frac{t^{k}_{11}\,\left[1+2\sqrt{1-(\eta^{k})^{2}}\right]\left[(t^{k}_{11})^{2}+\left[(t^{k}_{22})^{2}+(t^{k}_{33})^{2}\right]\sqrt{1-(\lambda^{k})^{2}}\right]}{3\left[(t^{k}_{11})^{2}+(t^{k}_{22})^{2}+(t^{k}_{33})^{2}\right]},$ (37) $\displaystyle t^{k+1}_{22}=$ $\displaystyle p_{k}\frac{t^{k}_{22}\,\left[1+\sqrt{1-(\eta^{k})^{2}}\right]\left[(t^{k}_{22})^{2}+(t^{k}_{11})^{2}\sqrt{1-(\lambda^{k})^{2}}\right]}{2\left[(t^{k}_{11})^{2}+(t^{k}_{22})^{2}\right]}$ $\displaystyle\quad+(1-p_{k})\frac{t^{k}_{22}\,\left[1+2\sqrt{1-(\eta^{k})^{2}}\right]\left[(t^{k}_{22})^{2}+\left[(t^{k}_{33})^{2}+(t^{k}_{11})^{2}\right]\sqrt{1-(\lambda^{k})^{2}}\right]}{3\left[(t^{k}_{11})^{2}+(t^{k}_{22})^{2}+(t^{k}_{33})^{2}\right]},$ (38) $\displaystyle t^{k+1}_{33}=$ $\displaystyle p_{k}t^{k}_{33}\,\sqrt{1-(\eta^{k})^{2}}\sqrt{1-(\lambda^{k})^{2}}+(1-p_{k})\frac{t^{k}_{33}\,\left[1+2\sqrt{1-(\eta^{k})^{2}}\right]\left[(t^{k}_{33})^{2}+\left[(t^{k}_{11})^{2}+(t^{k}_{22})^{2}\right]\sqrt{1-(\lambda^{k})^{2}}\right]}{3\left[(t^{k}_{11})^{2}+(t^{k}_{22})^{2}+(t^{k}_{33})^{2}\right]}.$ (39) From the above equations, it is observed that when $p_{k}\in[0,1)$, then $t_{11}^{k+1}\neq 0$, $t_{22}^{k+1}\neq 0$, $t_{33}^{k+1}\neq 0$ for all possible values of $\lambda^{k}$ $\in$ $(0,1]$ and $\eta^{k}$ $\in$ $(0,1]$. Hence, in this case, Alicek+1-Bobk+1 will get quantum advantage in both $2\rightarrow 1$ RAC and $3\rightarrow 1$ RAC. 
Now, suppose that Alicek+1-Bobk+1 performs the $2\rightarrow 1$ RAC task with probability $p_{k+1}$ and the $3\rightarrow 1$ RAC with probability $1-p_{k+1}$ (with $0\leq p_{k+1}\leq 1$). Following the aforementioned argument, we can, therefore, conclude that Alicek+1-Bobk+1 will also get quantum advantage overall. Next, consider that $p_{k}=1$. In this case, $t_{11}^{k+1}\neq 0$ and $t_{22}^{k+1}\neq 0$ for all possible values of $\lambda^{k}$ $\in$ $(0,1]$ and $\eta^{k}$ $\in$ $(0,1]$. But $t_{33}^{k+1}\neq 0$ only for $\lambda^{k}$ $\in$ $(0,1)$ and $\eta^{k}$ $\in$ $(0,1)$; $t_{33}^{k+1}=0$ if $\lambda^{k}=1$ and $\eta^{k}=1$. Consequently, Alicek+1-Bobk+1 will not get quantum advantage while performing the $3\rightarrow 1$ RAC if Alicek-Bobk performs the $2\rightarrow 1$ RAC with certainty (i.e., $p_{k}=1$) under projective measurements. Hence, in order to ensure overall quantum advantage by Alicek+1-Bobk+1 for all values of $p_{k+1}\in[0,1]$, we need the following: $\lambda^{k}\neq 1$ and $\eta^{k}\neq 1$. To summarize, if both the pairs Alicek-Bobk and Alicek+1-Bobk+1 perform the $2\rightarrow 1$ RAC task with some non-zero probability and the $3\rightarrow 1$ RAC with some non-zero probability, then both pairs will get overall quantum advantage for all values of $\lambda^{k}$ $\in$ $(0,1]$ and $\eta^{k}$ $\in$ $(0,1]$. Applying this argument to an arbitrary number of pairs with $p_{k}$ being equal to or not equal to $p_{\tilde{k}}$ for all choices of $k\neq\tilde{k}\in\\{1,2,\cdots\\}$, we get Corollary 2. On the other hand, if Alicek-Bobk performs the $2\rightarrow 1$ RAC task with certainty, then we have $p_{k}=1$. In this case, if Alicek+1-Bobk+1 performs the $3\rightarrow 1$ RAC with certainty, then this pair will not get quantum advantage if $\lambda^{k}=1$ and $\eta^{k}=1$. However, for all values of $\lambda^{k}$ $\in$ $(0,1)$ and $\eta^{k}$ $\in$ $(0,1)$, Alicek+1-Bobk+1 will get quantum advantage. Applying this argument to an arbitrary number of pairs, we get Corollary 3.
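The dichotomy behind Corollaries 2 and 3 can also be seen numerically: a direct transcription of Eqs. (37)-(39) shows that sharp measurements with $p_{k}=1$ annihilate $t^{k+1}_{33}$, while any non-zero weight on the $3\rightarrow 1$ task preserves it. A sketch:

```python
import math

def step_mixed(t, p, lam, eta):
    # Eqs. (37)-(39): convex combination of the 2->1 and 3->1 update maps,
    # weighted by the probability p of playing the 2->1 task.
    t11, t22, t33 = t
    s_lam = math.sqrt(1 - lam**2)
    s_eta = math.sqrt(1 - eta**2)
    n2 = t11**2 + t22**2
    n3 = t11**2 + t22**2 + t33**2
    f3 = (1 + 2 * s_eta) / (3 * n3)
    new11 = (p * t11 * (1 + s_eta) * (t11**2 + t22**2 * s_lam) / (2 * n2)
             + (1 - p) * t11 * f3 * (t11**2 + (t22**2 + t33**2) * s_lam))
    new22 = (p * t22 * (1 + s_eta) * (t22**2 + t11**2 * s_lam) / (2 * n2)
             + (1 - p) * t22 * f3 * (t22**2 + (t33**2 + t11**2) * s_lam))
    new33 = (p * t33 * s_eta * s_lam
             + (1 - p) * t33 * f3 * (t33**2 + (t11**2 + t22**2) * s_lam))
    return (new11, new22, new33)

singlet = (-1.0, -1.0, -1.0)
# Sharp measurements, 2->1 played with certainty: t33 is destroyed ...
sharp = step_mixed(singlet, p=1.0, lam=1.0, eta=1.0)
# ... but any non-zero probability of the 3->1 task keeps t33 alive.
mixed = step_mixed(singlet, p=0.5, lam=1.0, eta=1.0)
print(sharp[2], mixed[2])
```

With $p_{k}=1$ the third component vanishes exactly, so a subsequent pair playing the $3\rightarrow 1$ task with certainty gains no advantage, as stated in Corollary 3.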
# Reddit Entity Linking Dataset Nicholas Botzer University of Notre Dame, Notre Dame, IN Yifan Ding University of Notre Dame, Notre Dame, IN Tim Weninger University of Notre Dame, Notre Dame, IN Corresponding author: Tim Weninger, [email protected] ###### Abstract We introduce and make publicly available an entity linking dataset from Reddit that contains 17,316 linked entities, each annotated by three human annotators and then grouped into Gold, Silver, and Bronze to indicate inter-annotator agreement. We analyze the different errors and disagreements made by annotators and suggest three types of corrections to the raw data. Finally, we tested existing entity linking models that are trained and tuned on text from non-social media datasets. We find that, although these existing entity linking models perform very well on their original datasets, they perform poorly on this social media dataset. We also show that the majority of these errors can be attributed to poor performance on the mention detection subtask. These results indicate the need for better entity linking models that can be applied to the enormous amount of social media text. Keywords – entity linking, dataset, natural language processing ## 1 Introduction Entity Linking is the problem of mapping free text with the appropriate entities in a structured knowledge base (KB), such as Wikipedia. Linking natural language text to large knowledge graphs enables applications to make use of rich semantic relationships that may be implied in the natural language, but are explicitly expressed in the knowledge graph. Entity linking is therefore a core task in natural language processing with applications in recommender systems [1], chatbots [2], relation extraction [3], and knowledge base completion [4, 5] among many others. The entity linking task is generally broken down into subtasks: (1) mention detection and (2) entity disambiguation. 
Similar to named entity recognition (NER), the mention detection subtask finds substrings from text that represent some entity. Mention detection is itself a wide area of research encompassing semantic role labelling [6], anaphora resolution [7], and others in order to consider the various ways that humans express entities in natural language. The entity disambiguation task takes the word or phrase identified from the mention detection task and then identifies an entry in a knowledge base that matches the entity.

Figure 1: Example entity linking task. Highlights indicate three entity mentions from the surface text; blue boxes denote the linked entity corresponding to each mention.

For example, the illustration in Fig. 1 shows the phrase “Booth killed Lincoln at Ford’s Theater.”, which has three entity mentions highlighted in yellow: ‘Booth’, ‘Lincoln’, and ‘Ford’s Theater’. These mentions alone are useful for many downstream tasks, but entity disambiguation goes one step further and reconciles the entities with their specific Wikipedia entries (or some other knowledge base) such that the surface text is linked to John_Wilkes_Booth, Abraham_Lincoln, and Ford’s_Theatre respectively. Having linked the surface text to its entities, any number of additional steps can be taken to refine or reason about the data [8]. Because of its wide applicability, entity linking has been heavily explored, and many datasets are available for model training and testing. However, most of these entity linking datasets are extracted from well-written news articles, websites, or otherwise professionally curated corpora. Although useful for many tasks, the lexical and linguistic styles of commonly available datasets do not match what is commonly expressed on social media platforms. As a result, recent work has found that many entity linking models do not perform well on social media text [9]. 
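To make the two subtasks concrete, the following toy sketch runs the Fig. 1 example through a dictionary-based pipeline. The mention dictionary and knowledge base here are hypothetical stand-ins for illustration only, not part of any model or dataset discussed in this paper.

```python
# Hypothetical surface-form -> entity mapping standing in for a real KB.
KB = {
    "Booth": "John_Wilkes_Booth",
    "Lincoln": "Abraham_Lincoln",
    "Ford's Theater": "Ford's_Theatre",
}

def detect_mentions(text):
    # Subtask 1 (mention detection): find substrings that denote entities.
    # Here: naive longest-match lookup against the known surface forms.
    mentions = []
    for surface in sorted(KB, key=len, reverse=True):
        if surface in text:
            mentions.append(surface)
    return mentions

def disambiguate(mention):
    # Subtask 2 (entity disambiguation): map a mention to a KB entry.
    return KB.get(mention)

text = "Booth killed Lincoln at Ford's Theater."
links = {m: disambiguate(m) for m in detect_mentions(text)}
print(links)
```

A real system must of course cope with ambiguous surface forms ('Lincoln' the president vs. the city), which is exactly where the disambiguation subtask becomes non-trivial.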
Besides the lack of training data, entity linking from social media is particularly challenging because of the presence of jargon, poor grammar usage, inconsistent lexical formatting and variation, the wide usage of anaphora, and frequent use of idioms, euphemisms, and other creative (or lazy) use of language. In addition, the use of threading systems in many social discourse platforms adds another layer of complexity to natural language processing systems. Some efforts have been made to reconcile this disparity. Typically these systems perform either the mention detection task [10] or the entity disambiguation task [11, 12, 13, 14], but rarely both. More recently, the rise of deep neural architectures has enabled end-to-end models that learn both tasks jointly [15, 16]. When considering social media data, most related works use Ritter et al.’s entity linking dataset from Twitter [17], which contains hand-annotated entity matches for 2,400 Tweets, as well as the non-social media datasets Figer [18] and OntoNotes [19] from NIST TAC’s entity discovery and linking tasks. Additional hand-annotated datasets have been collected from Twitter using various methodologies [20, 21].

Figure 2: The annotation interface used by human annotators. The interface allowed them to highlight arbitrary spans of text and link them with valid Wikipedia pages.

Although these datasets and algorithms have greatly progressed entity linking, it is important to admit that most social and digital text is not created in Tweet-form. Rather, the vast amount of online communication appears in modern bulletin boards, comment threads, and other user-generated discussion formats. In the present work, we consider the social media site Reddit. Reddit is an archetypal example of a socio-technical discussion board with hundreds of millions of posts and billions of individual comments. On Reddit, each post is submitted to a specific subreddit and a discussion is attached to each post. 
User-provided upvotes and downvotes govern the visibility and ranking of posts and comments on the site. Comments are threaded, meaning that each comment must have a single parent-comment; top-level (i.e., root) comments are considered children of the post text and link, which usually sets the topic of discussion. Reddit consists of thousands of distinct communities called subreddits. Each subreddit is associated with a specific topic, such as r/news or r/pokemon, and a volunteer team of moderators that create and enforce guidelines for each subreddit. Posts made within each subreddit must adhere to these guidelines, or offending users may be banned from future posting or commenting in the subreddit. Due to the more relaxed moderation rules on the platform, especially surrounding controversial topics, Reddit threads commonly contain toxic, harassing, or explicit commentary [22]. A distinct difference that separates Reddit from Twitter or Facebook is how communities form. On Reddit, users seek out and find subreddits that interest them, whereas on Twitter, Facebook and other social networks, users build friendship networks and then share posts within this network [23]. Because of its breadth and wide availability, researchers have begun to use Reddit and similar socio-technical discussion boards such as HackerNews, Slack, Discord, and Disqus for a variety of tasks. These tasks generally tackle specific subreddits and the community dynamics that arise within them. A significant portion of this research has analyzed subreddits and user behaviors surrounding mental health disorders, which has taken the approach of extracting text and performing clustering to find different themes within these subreddits [24, 25]. In another study, the subreddit r/SuicideWatch was analyzed to help determine whether a user may be at risk of committing suicide [26]. The resulting data allowed researchers to create models that identify at-risk users in the future. 
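The threading structure described above can be sketched as a simple parent-pointer tree. The field names used here (id, parent) are illustrative, not the dataset's actual schema:

```python
from collections import defaultdict

# Every comment has exactly one parent; top-level comments hang off the post.
comments = [
    {"id": "c1", "parent": "post"},
    {"id": "c2", "parent": "c1"},
    {"id": "c3", "parent": "c1"},
    {"id": "c4", "parent": "post"},
]

# Invert the parent pointers into a child list per node.
children = defaultdict(list)
for c in comments:
    children[c["parent"]].append(c["id"])

def thread(node, depth=0):
    # Depth-first walk yields the tree in display order, indented by depth.
    out = []
    for child in children[node]:
        out.append("  " * depth + child)
        out.extend(thread(child, depth + 1))
    return out

print("\n".join(thread("post")))
```

This nesting is precisely what flat, Tweet-oriented pipelines ignore, and part of why threaded discussion text needs its own treatment.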
Reddit has also proven to be a good source for the collection of datasets. One work in particular utilizes posts from ten subreddits and then has them annotated using Mechanical Turk to determine whether the poster’s behavior exhibits stress or not [27]. In some cases, the representative labels for a task are built right into Reddit. For example, the common occurrence of sarcasm labels on Reddit can be used to create a large corpus for sarcasm classification [28]. Furthermore, the threaded nature of Reddit posts has allowed researchers to explore how persuasive arguments are carried out [29]. By analyzing the arguments of the r/Changemyview subreddit, researchers were able to capture the argumentative sentences and identify important persuasive discourse markers. Generally speaking, social media analysts are beginning to develop and employ text analysis tools to understand a wide variety of social and interpersonal issues such as gender differences [30], weight loss trends [31], and the spread of technological innovations [32]. However, the lexical and linguistic style used in online discussion boards differs from the professional journalistic styles used to curate large natural language processing models and datasets. Some datasets have recently become available for microblogs like Twitter [33, 34], but the style of microblogs is vastly different from that of discussion boards. Despite the large amount of social data contributed to these platforms, there exists no large, labeled dataset from these types of threaded discussion boards. ### 1.1 Related Works Within the scope of natural language processing, the problem of entity linking has been quite extensively studied and a large body of literature discusses the subject; Shen et al. [35] and Sevgili et al. [36] present good surveys of the field. We also highlight two other works that have previously performed crowd-sourced entity linking [9, 49], and compare our method against theirs in Section 3. 
One of the issues for researchers working in this domain is the lack of training and testing data for social media, as well as the field's focus on only narrow portions of the overall problem, e.g., news articles and Wikipedia. In the present work, we review some of the recent entity linking techniques and recommend that researchers focus future work on social discussion boards like Reddit and produce holistic end-to-end models rather than piecemeal approaches to entity linking. A major problem with existing models is that they cannot label entities that were not included in the initial training set. The concept of “zero-shot” learning was developed to help solve this problem [37, 5]. One major benefit of zero-shot learning is that the model can correctly map entities without having trained on any mentions of them. Others have sought to exploit the type systems of Wikipedia to improve the disambiguation of models by understanding the different contexts for a given mention [38, 39]. By using the categories of Wikipedia as entity-types, the models can disambiguate the word correctly by using the context of surrounding words. Another successful approach uses reinforcement learning and a global sequence perspective to select entities for disambiguation [40, 41]. These models are able to move beyond the local context of the type models and use all of the previously predicted entities to improve performance. Previous works have attempted to tackle the problem on Twitter. Early work on social entity linking has focused on the entity disambiguation task for tweets by utilizing a graph-based framework that captures users’ topics of interest [42]. Other models have been developed to perform end-to-end entity linking on Twitter. Guo et al. [43] approach the problem by using a structural SVM that jointly optimizes the mention detection and entity disambiguation tasks. Similarly, Fang and Chang [44] create an end-to-end spatiotemporal model to extract information from tweets. 
Unfortunately, the data used in these three papers is not made publicly available. Finally, researchers have been able to employ deep language models, like BERT, for entity linking [45]. In one study, BERT was able to achieve good performance on the end-to-end entity linking task with only small changes [46]. In another study, Yamada et al. [47] trained BERT on a new masked entity prediction task. By using this new pre-training task, they are able to embed contextualized entities within BERT, allowing for superior performance on a variety of entity-based tasks. Yet, these benchmark datasets are almost entirely based on journalistic-quality news text, which is vastly different from the enormous amount of text that is posted to social bulletin boards.

### 1.2 Research Questions

To help alleviate the deficit of training data from social bulletin boards, we introduce and make available a dataset of 17,316 hand-annotated entities from 619 Reddit posts and a sample of comments from each of their comment threads. For each post we asked three annotators from Amazon's Mechanical Turk to: (1) hand-label any and all entity mentions and (2) link each mention to its representative entry on Wikipedia. Specifically, we ask the following research questions:

1. RQ1: How well do human annotators agree on entity labels?
2. RQ2: Do existing state-of-the-art entity linking models perform well on the social discussion dataset?
3. RQ3: Which parts of end-to-end entity linking models are most responsible for errors in entity linking on social text?

We evaluate the agreement rate of different entities and annotators and make simple corrections within the raw data. Next, we evaluate several popular entity linking models and algorithms on this new dataset. Despite our various attempts, we observed that existing models did not outperform simple baselines on this new data. These results indicate that much work remains in the entity linking task.
It is our goal that this dataset spurs further development of entity linking systems, especially on the vast amount of non-Tweet-styled social discussion data.

Table 1: Subreddits from which posts and comments were selected for annotation.

| Subreddit | Posts | Comments |
|---|---|---|
| r/movies | 67 | 143 |
| r/worldnews | 158 | 315 |
| r/gaming | 30 | 63 |
| r/news | 84 | 163 |
| r/politics | 71 | 145 |
| r/explainlikeimfive | 49 | 90 |
| r/sports | 72 | 148 |
| r/science | 47 | 91 |
| r/Economics | 41 | 85 |
| Total | 619 | 1,243 |

## 2 Collection Methodology

The dataset introduced in this paper was collected from posts across a variety of subreddits. Table 1 shows the subreddits from which posts were collected as well as the number of comments. The subreddits were selected based on two criteria: the first was the popularity of the subreddit; the second was whether the subreddit was likely to include a broad variety of entities. For each post we selected the top-scoring comment (upvotes minus downvotes) and, when available, the top-scoring child and grandchild comments. Because we chose popular subreddits and high-scoring posts, our dataset has an emphasis on high-quality content selected for annotation. Collection was limited to posts submitted to Reddit between Jan. 2018 and Aug. 2019. On average, the mean length of a post in our dataset is 20 tokens and the mean length of a comment is 34 tokens. The shortest post was 4 tokens and the longest was 66 tokens. The shortest comment was one token, with the longest comment being 207 tokens. Because we are providing this data to human annotators, we also removed a small number of posts that had explicit content. In total we collected 619 posts and 1,243 comments from 9 subreddits.
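The comment-selection rule just described can be sketched as follows. This is a minimal illustration, not the authors' collection script; the field names (`id`, `parent_id`, `ups`, `downs`) are illustrative rather than exact Reddit API fields.

```python
def score(comment):
    """Comment score as described above: upvotes minus downvotes."""
    return comment["ups"] - comment["downs"]

def select_thread(comments, parent_id, depth=3):
    """Walk down a comment tree, taking the top-scoring comment at each
    level: the top comment of the post, then its top child, then that
    child's top child. Returns up to `depth` comments."""
    selected = []
    for _ in range(depth):
        children = [c for c in comments if c["parent_id"] == parent_id]
        if not children:
            break  # no replies at this level; stop descending
        best = max(children, key=score)
        selected.append(best)
        parent_id = best["id"]
    return selected
```

With the per-post limit of one top comment plus its top child and grandchild, this yields at most three comments per post, consistent with the ratio of 1,243 comments to 619 posts reported above.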
### 2.1 Human Annotation

To collect entity mentions and their matches from Reddit posts and comments, we created a straightforward web annotation framework and provided it as a human intelligence task (HIT) to annotators from Mechanical Turk. We limited the HIT to American annotators because the posts and comments were in English and primarily focused on American current events and culture. This task was reviewed and approved by the University of Notre Dame's Internal Review Board (#20-01-5751). Only after informed consent was obtained was the annotator given the following instructions:

1. Highlight each portion of text that you believe matches with a Wikipedia article and left click to open a text box.
2. Determine the correct Wikipedia page for the text based on the context of the discussion thread and copy the Wikipedia link into the Wikipedia URL section. Verify the link entered is a valid Wikipedia URL by clicking the "Verify Wikipedia Link" button.
3. A green checkmark will appear to the right if the URL is valid. You may close the box with the "Done" button after.
4. Once you have highlighted and linked all pieces of text that you feel match with a Wikipedia article, please advance to the next page.

Next, the annotation system asked the annotator to complete a short (and obvious) practice case. Any incorrect or missing annotations were provided to the annotator as feedback. Because the instructions did not explicitly or rigorously define "entity", individual interpretations of what is and is not an entity are an important element in our dataset. It has been shown in prior work that the consensus of what constitutes an entity is not concrete, even for researchers within the field [48]. However, the practice task certainly framed our goal. The practice post was "Iran fires missile. United States airbase struck.", where the four highlighted entities here indicate the expected entity matches.
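The link verification in the instructions above required each URL to resolve to a live Wikipedia page. A minimal offline sketch of the shape of that check is shown below; the actual tool also issued an HTTP request to Wikipedia, which is omitted here, so this is an assumption-laden simplification rather than the deployed validator.

```python
from urllib.parse import urlparse

def looks_like_wikipedia_url(url):
    """Offline sanity check: a candidate annotation link should use
    http(s), point at a wikipedia.org host, and name an article under
    the /wiki/ path. (The production check also verified the page
    actually resolves via HTTP.)"""
    parsed = urlparse(url)
    return (parsed.scheme in ("http", "https")
            and parsed.netloc.endswith("wikipedia.org")
            and parsed.path.startswith("/wiki/")
            and len(parsed.path) > len("/wiki/"))
```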
After the practice task was complete, the annotators were asked to annotate 10 posts and their related comments one at a time. A screen capture of the annotation system is illustrated in Fig. 2. To make an annotation, an annotator selected a substring of text from the post title or comment. Immediately after text was selected, a popup-style box would appear with the highlighted text and a prompt to enter the Wikipedia link. Presumably, the annotator would search Wikipedia for a proper entity link and provide it to the system. To ensure consistency, the link had to be verified to resolve with a valid HTTP response from Wikipedia before the annotation could be submitted.

Each post and comment thread was annotated by three different annotators, as was done in two other crowd-sourcing studies [9, 49]. By asking the same user to detect entity mentions and perform the linking, we expect annotations to vary among the annotators. This task is different from other crowd-sourced entity linking methodologies, which pre-selected the entity mentions [9] or only asked for entity links for mentions previously found by other users [49]. Our methodology also differs in that we use no expert annotators to adjudicate disagreements in the annotations.

Each annotator was paid $4.00 USD for their effort. Upon completion we briefly inspected each annotator's submissions to ensure that a reasonable effort was made. Overall we received 17,316 annotations from 202 annotators, not including the submissions of 28 annotators that were rejected. In all but the most egregious cases, rejected annotators were still paid. In a small number of instances, annotators did not complete the full set of 10 annotations. We have records of their annotations, but because we cannot verify the quality of these incomplete tasks, we do not include them in our analysis or the released dataset. In total we spent $908.00 USD for 227 annotators.

Table 2: Number of annotated entities from posts and comments.
Inter-annotator agreement produces groups of Gold, Silver, and Bronze annotation levels.

| | Group | Raw Entities | Unique Entities | Unique Mentions |
|---|---|---|---|---|
| Posts | Gold | 704 | 527 | 582 |
| | Silver | 1,159 | 900 | 989 |
| | Bronze | 2,557 | 1,877 | 1,938 |
| Comments | Gold | 638 | 487 | 527 |
| | Silver | 1,564 | 1,126 | 1,244 |
| | Bronze | 4,481 | 2,788 | 3,301 |

### 2.2 Annotation Results

The collected results were aggregated to determine the agreement among the three annotators. Gold annotations are those entities identified and linked by all three annotators. For now, Gold annotations must agree on the exact text selection (including punctuation, whitespace, etc.) and link to the same Wikipedia page (without considering redirection pages, disambiguation pages, etc.). Because gold annotations demonstrate unanimous agreement, we consider them to be high quality. Similarly, silver annotations are those entities selected by two of the three annotators. Although these annotations do not show unanimous agreement, they generally remain high quality and are useful for evaluation and training purposes. Finally, bronze annotations are those entities indicated by a single annotator. Due to the lack of inter-annotator agreement, bronze annotations are the lowest quality, but they still contain interesting occurrences. For example, in the post title beginning with "Box Office Week: Black Panther smashes at #1 with $201M…", all three annotators agreed that the text "Black Panther" was an entity mention, but they linked the mention to three different Wikipedia pages: the film, the comic book, and the animal.

Raw results of annotation for both posts and comments are displayed in Table 2. Note that the number of gold entities is not exactly the same as the number of unique entities and unique mentions. This is because the exact same entity (i.e., Wikipedia page) and mention is sometimes found in two different post titles.
Likewise, different surface forms often link to the same entity (cf. entity resolution), and the same surface form sometimes links to different entities (cf. entity disambiguation).

### 2.3 Cleaning

Because it is important to consider how small annotation disagreements and different interpretations permeate this dataset, we performed a critical analysis of annotator disagreement and applied simple procedures to clean certain differences from the dataset.

#### 2.3.1 Redirection Cleaning

Our first cleaning step was to reconcile Wikipedia redirection pages. For example, the entity mention "Donald Trump" is frequently linked to the Wikipedia page entitled Trump_(president), but this Wiki page automatically redirects to Donald_Trump. From the annotator's point of view, these two Wiki pages are identical in all ways except for the link. Yet these two different links are considered disagreements in our raw dataset. So, we found all redirection links provided by the annotators and reconciled them to the same Wiki page. This simple cleaning task resulted in an additional 59 gold annotations and an additional 66 silver annotations. After this simple cleaning, we performed a more in-depth analysis. We observed two types of errors: link disagreement and mention disagreement.

#### 2.3.2 Link Disagreement

Link disagreement occurs when two or more annotators highlight the exact same surface text but link the text to different entities. We sought to understand this disagreement more thoroughly. Specifically, we asked: When annotators disagree on an entity, are their choices close to one another? That is, do they link to two entities which are similar? Or do their ideas over what the surface form represents diverge significantly? To answer this question we used the Wikipedia Link Measure (WLM) to measure the similarity between two different entity annotations that have identical surface forms.
The WLM score uses the internal link structure of Wikipedia to measure the similarity between two Wiki pages based on how many incoming links the pages share. Formally, given two entity pages $a$ and $b$ as well as their links within the Wikipedia graph $W$, WLM is defined as:

$\textrm{WLM}(a,b)=1-\frac{\log(\max(|A|,|B|))-\log(|A\cap B|)}{\log(|W|)-\log(\min(|A|,|B|))},$

where $A$ and $B$ are the sets of incoming links to $a$ and $b$, respectively [50]. Simply put, WLM uses the Wikipedia network to measure the semantic similarity of two entities. We compare the mismatched annotations against a null model consisting of random pairings of entities with the same number of incoming links.

Figure 3: Wikipedia Link Measure comparing annotations with Link Disagreements against the null model. Despite linking to different pages for the same mention, these Link Disagreements are significantly more similar than random Wikipedia pairs (Mann-Whitney $\rho=0.16$, $p<0.001$).

Table 3: Top 10 entities from the gold and silver annotations of the dataset. For each entity, the mention texts with which it was annotated are shown.

| Entity | # | Unique Mentions |
|---|---|---|
| Donald_Trump | 88 | President Trump, Trump, trump, President, trumpian, Donald Trump, Tump, US President Donald Trump, Trump's, 45, his |
| United_States | 53 | U.S., US, American, USA, U.S, America's, United States, America |
| Film | 36 | Films, films, Movie, movie, FILMS, mocie, movies, Movies, Film, film |
| China | 30 | china, Chinese, China |
| Russia | 26 | Russia, Russian, Russians |
| Human | 18 | human race, Humans, human being, human, Humanity, humans |
| Republican_Party_(United_States) | 16 | Republicans, GOP, red, Republican Party, Republican |
| Germany | 15 | Germany, germany |
| Research | 14 | Researchers, research, researcher, Research, researchers |
| Scientist | 14 | scientists, Scientists, scientist |
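The WLM formula can be computed directly from the sets of incoming links. Below is a minimal sketch; the zero-overlap case, where the formula is undefined ($\log 0$), is clamped to 0 here, which is a common convention rather than something the definition above prescribes.

```python
import math

def wlm(links_a, links_b, num_pages):
    """Wikipedia Link Measure between two pages, given the sets of pages
    that link to each of them (A and B) and the total number of pages |W|
    in the Wikipedia graph. Higher values mean more similar pages."""
    overlap = len(links_a & links_b)
    if overlap == 0:
        return 0.0  # no shared in-links: log(0) undefined, clamp to 0
    hi = max(len(links_a), len(links_b))
    lo = min(len(links_a), len(links_b))
    return 1 - (math.log(hi) - math.log(overlap)) / (math.log(num_pages) - math.log(lo))
```

Two pages with identical in-link sets score exactly 1.0, and the score falls toward 0 as the shared in-links become a smaller fraction of either set.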
Figure 3 illustrates the WLM distribution of mismatched entities against the null model. A Mann-Whitney test of the two distributions determined that the mismatched annotations are significantly more similar than the null model ($\rho=0.16$, $p<0.001$). These results indicate that, although annotators may not explicitly agree on the linked entity, their annotations are much more similar than if they had chosen a random entity. Although we observe that these disagreements link to similar Wikipedia entities, we have no way of reconciling these differences. Therefore we cannot apply any changes to the raw data that correct link disagreements.

Table 4: Redirection correction (Redir.), mention correction (Men.), and both corrections (All) made to the raw annotation data increase the annotator agreement.

| | | Raw | Redir. | Men. | All |
|---|---|---|---|---|---|
| Posts | Gold | 662 | 696 | 670 | 704 |
| | Silver | 1,152 | 1,154 | 1,159 | 1,159 |
| | Bronze | 2,699 | 2,594 | 2,661 | 2,557 |
| Comments | Gold | 591 | 616 | 613 | 638 |
| | Silver | 1,544 | 1,552 | 1,559 | 1,564 |
| | Bronze | 4,662 | 4,573 | 4,566 | 4,481 |

#### 2.3.3 Mention Disagreement

Mention disagreement occurs when two or more annotators agree on the entity link but disagree on the surface form of the entity mention. Usually, these disagreements differ by only a character or two. In other instances, mentions disagree on whether to include the plural or possessive endings of an entity name. We reconcile these differences by expanding all annotations to end at word boundaries, e.g., white space or punctuation. Redirection cleaning and mention disagreement correction were applied to each set of annotations. The results displayed in Table 4 show how the numbers for each annotation level change as Redirection Cleaning (Redir.) and Mention Disagreement correction (Men.) were applied individually. The final column (All) displays the final results with all corrections applied collectively.
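The word-boundary expansion used for mention reconciliation can be sketched as follows. This is an illustrative implementation under the assumption that boundaries are non-alphanumeric characters; the exact normalization rule used to produce the dataset may differ in detail.

```python
def expand_to_word_boundaries(text, start, end):
    """Widen a [start, end) annotation span so it begins and ends at word
    boundaries (whitespace or punctuation). This reconciles mentions that
    differ by a character or two, e.g. a span covering only "Trum" is
    expanded to cover "Trump"."""
    while start > 0 and text[start - 1].isalnum():
        start -= 1  # grow left until the previous char is a boundary
    while end < len(text) and text[end].isalnum():
        end += 1    # grow right until the current char is a boundary
    return start, end
```

Treating the apostrophe as a boundary means a partial span inside "Trump's" expands only to "Trump", so possessive endings do not absorb the whole token.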
#### 2.3.4 Other Disagreements

Another interesting aspect of this dataset is the detail of how annotators selected certain mentions and entities. Consider, for example, an actual disagreement among three annotators illustrated in Fig. 4. In this instance, the four bronze annotations (two from annotator 1, and one each from annotators 2 and 3) show a mix of link disagreement and mention disagreement. Despite their similarity, none of these annotations can be reconciled using the above corrections in part or in aggregate. In annotator 1's case there is no reference to the year 2010 in the text, showing that some annotators may have selected an entity that is only somewhat similar. This example shows how difficult finding annotation agreement among human annotators can be. Despite this difficulty, this dataset shows a surprising amount of agreement, and might prove useful for better understanding annotator perception and biases.

Figure 4: Instance where three annotators provided different mention-entity annotations for the text "…finance chair who paid 1M to a playboy playmate wasn't for…". Annotator 1 linked to Playboy and List_of_Playboy_Playmates_of_2010, while annotators 2 and 3 each linked to Playboy_playmate. Annotations, in this case, are extremely close but are not in agreement. Highlights indicate the entity mention; blue boxes denote the linked entity from the example text.

Table 5: Results on the five other non-social media datasets for entity disambiguation (ED) and end-to-end entity linking (E2E EL). Results are reported for Precision (P), Recall (R), and F1 score.
| | Model | MSN | | | AQNT | | | ClueW | | | WIKI | | | ACE | | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| | | P | R | F1 | P | R | F1 | P | R | F1 | P | R | F1 | P | R | F1 |
| ED | Prior | 0.76 | 0.76 | 0.76 | 0.86 | 0.83 | 0.84 | 0.67 | 0.67 | 0.67 | 0.64 | 0.64 | 0.64 | 0.90 | 0.84 | 0.87 |
| | Query | 0.64 | 0.64 | 0.64 | 0.82 | 0.82 | 0.82 | 0.56 | 0.56 | 0.56 | 0.57 | 0.57 | 0.57 | 0.70 | 0.70 | 0.70 |
| | deep-ed | 0.92 | 0.92 | 0.92 | 0.90 | 0.87 | 0.89 | 0.76 | 0.76 | 0.76 | 0.74 | 0.74 | 0.74 | 0.90 | 0.84 | 0.87 |
| | End-to-End | 0.94 | 0.90 | 0.92 | 0.92 | 0.87 | 0.90 | 0.83 | 0.72 | 0.77 | 0.78 | 0.71 | 0.74 | 0.93 | 0.84 | 0.88 |
| | mulrel-nel | 0.93 | 0.93 | 0.93 | 0.88 | 0.85 | 0.87 | 0.77 | 0.77 | 0.77 | 0.77 | 0.77 | 0.77 | 0.91 | 0.85 | 0.88 |
| | wnel | 0.93 | 0.92 | 0.92 | 0.92 | 0.89 | 0.91 | 0.77 | 0.77 | 0.77 | 0.75 | 0.75 | 0.75 | 0.90 | 0.84 | 0.87 |
| E2E EL | NER + Prior | 0.24 | 0.47 | 0.32 | 0.16 | 0.34 | 0.22 | 0.10 | 0.38 | 0.16 | 0.10 | 0.28 | 0.15 | 0.08 | 0.84 | 0.14 |
| | NER + Query | 0.24 | 0.47 | 0.32 | 0.16 | 0.34 | 0.22 | 0.10 | 0.38 | 0.16 | 0.10 | 0.28 | 0.15 | 0.05 | 0.55 | 0.09 |
| | End-to-End | 0.75 | 0.73 | 0.74 | 0.34 | 0.39 | 0.37 | 0.44 | 0.48 | 0.46 | 0.39 | 0.40 | 0.39 | 0.19 | 0.68 | 0.30 |
| | REL | 0.77 | 0.67 | 0.72 | 0.37 | 0.29 | 0.33 | 0.56 | 0.27 | 0.36 | 0.47 | 0.33 | 0.39 | 0.58 | 0.17 | 0.26 |

#### 2.3.5 Top Entities and their Mentions

Another way to view this data, as well as the difficulty of the entity linking task in general, is by considering the different mentions that link to the same entities. The entries in Table 3 show the top 10 most frequent entities linked in our dataset as well as some of the mentions found to link to them. Many entities, like United_States, have multiple variations in how they are mentioned in text, especially in social text.
Other entities, like Film, are represented by many synonymous terms, as well as different capitalization, misspellings, and other variations in their surface forms. Although not the objective of the current work, the entity mentions and their links may be a valuable resource for social discourse analysis and other linguistic studies.

### 2.4 Data Availability

The dataset containing the cleaned annotations presented in Table 4 is available at https://doi.org/10.5281/zenodo.3970806. The dataset contains the data of posts and comments gathered from Reddit. Metadata for each post contains the post id, the subreddit to which it was posted, and the post's title. Metadata for each comment contains the post id, comment id, subreddit, the parent id (which is itself a comment or post in the dataset), and the comment text. Post and comment ids can be used to gather additional data from Reddit directly. Annotations are stored for posts and comments according to their agreement level: Gold, Silver, or Bronze, for a total of 6 annotation files. In addition to post or comment ids, annotations contain the mention text, the linked Wikipedia entity (after applying redirection correction), the start and end positions of the mention, and the corrected mention text.

## 3 Experiments

Our second goal is to evaluate the performance of entity linking models on our new dataset. Because the bronze annotations fail to show any annotator agreement, we do not perform an evaluation on this part of the dataset. Instead, we limit our evaluation to the gold and silver annotations, which we consider to be of high quality due to their inter-annotator agreement. As discussed earlier, the entity linking task can be viewed as two separate consecutive tasks: (1) mention detection and (2) entity disambiguation. The first task is widely recognized as named entity recognition (NER) and is not considered individually in this paper.
The second task itself has received much attention in recent years, with the introduction of deep neural network-based entity disambiguation models [11, 12, 13, 14, 51, 40]. However, only a handful of end-to-end models have been developed to perform both tasks jointly [16, 52]. Overall, we evaluate six different entity disambiguation models and four end-to-end models.

### 3.1 Entity Disambiguation Models

We formally define the entity disambiguation task as follows. Given a document $D=\{w_{1},w_{2},\ldots,w_{n}\}$ containing $n$ tokens, a set of entity mentions from the document $M=\{m_{1},\ldots,m_{x}\}$, and a knowledge base of entities $E=\{e_{1},e_{2},\ldots,e_{m}\}$, the goal of entity disambiguation is to find a mapping $\mu:M\mapsto E$ that assigns each mention $m\in M$ to the appropriate entity $e\in E$ in the knowledge base.

Table 6: Results on the AIDA-CoNLL datasets for entity disambiguation (ED) and end-to-end entity linking (E2E EL). Results are reported for Precision (P), Recall (R), and F1 score.
| | Model | AIDA-Train | | | AIDA-A | | | AIDA-B | | |
|---|---|---|---|---|---|---|---|---|---|---|
| | | P | R | F1 | P | R | F1 | P | R | F1 |
| ED | Prior | 0.75 | 0.75 | 0.75 | 0.71 | 0.71 | 0.71 | 0.69 | 0.69 | 0.69 |
| | Query | 0.63 | 0.63 | 0.63 | 0.61 | 0.61 | 0.61 | 0.60 | 0.60 | 0.60 |
| | deep-ed | 0.87 | 0.87 | 0.87 | 0.87 | 0.87 | 0.87 | 0.87 | 0.87 | 0.87 |
| | End-to-End | 0.97 | 0.96 | 0.97 | 0.95 | 0.93 | 0.94 | 0.89 | 0.85 | 0.87 |
| | mulrel-nel | 0.95 | 0.95 | 0.95 | 0.92 | 0.92 | 0.92 | 0.93 | 0.93 | 0.93 |
| | wnel | 0.86 | 0.86 | 0.86 | 0.81 | 0.81 | 0.81 | 0.81 | 0.81 | 0.81 |
| E2E EL | NER + Prior | 0.42 | 0.74 | 0.53 | 0.41 | 0.71 | 0.52 | 0.40 | 0.69 | 0.51 |
| | NER + Query | 0.27 | 0.48 | 0.34 | 0.27 | 0.46 | 0.34 | 0.25 | 0.44 | 0.33 |
| | End-to-End | 0.96 | 0.96 | 0.96 | 0.90 | 0.89 | 0.90 | 0.84 | 0.81 | 0.82 |
| | REL | 0.84 | 0.73 | 0.78 | 0.80 | 0.71 | 0.75 | 0.80 | 0.71 | 0.75 |

Table 7: Results on our Reddit datasets for entity disambiguation (ED) and end-to-end entity linking (E2E EL). Results are reported for Precision (P), Recall (R), and F1 score.

| | Model | Gold | | | Silver | | | Comb. | | |
|---|---|---|---|---|---|---|---|---|---|---|
| | | P | R | F1 | P | R | F1 | P | R | F1 |
| ED | Prior | 0.81 | 0.78 | 0.79 | 0.69 | 0.66 | 0.68 | 0.73 | 0.70 | 0.72 |
| | Query | 0.89 | 0.89 | 0.89 | 0.77 | 0.77 | 0.77 | 0.81 | 0.81 | 0.81 |
| | deep-ed | 0.81 | 0.78 | 0.80 | 0.73 | 0.70 | 0.71 | 0.76 | 0.73 | 0.74 |
| | End-2-End | 0.93 | 0.51 | 0.66 | 0.89 | 0.40 | 0.55 | 0.90 | 0.47 | 0.62 |
| | mulrel-nel | 0.72 | 0.70 | 0.71 | 0.63 | 0.61 | 0.62 | 0.67 | 0.65 | 0.66 |
| | wnel | 0.87 | 0.83 | 0.85 | 0.79 | 0.74 | 0.77 | 0.83 | 0.78 | 0.80 |
| E2E EL | NER + Prior | 0.13 | 0.29 | 0.18 | 0.14 | 0.15 | 0.15 | 0.28 | 0.19 | 0.23 |
| | NER + Query | 0.14 | 0.30 | 0.19 | 0.14 | 0.15 | 0.15 | 0.28 | 0.22 | 0.24 |
| | End-2-End | 0.22 | 0.21 | 0.22 | 0.27 | 0.12 | 0.17 | 0.49 | 0.15 | 0.23 |
| | REL | 0.20 | 0.35 | 0.25 | 0.24 | 0.20 | 0.22 | 0.44 | 0.25 | 0.32 |

We used six methods: two high-quality baselines and four popular entity disambiguation models. First, we evaluated entity disambiguation on existing non-social media datasets: the AIDA-CoNLL (AIDA-Train, AIDA-A, AIDA-B), MSNBC (MSN), AQUAINT (AQNT), ACE2004 (ACE), WNED-WIKI (WIKI), and ClueWeb (ClueW) datasets; as well as the All-corrected Gold, Silver, and Gold + Silver (Comb.) versions of the Reddit entity linking dataset after corrections were made (i.e., the All column from Table 4). Simply put, given a dataset containing text and annotated mentions, the models try to predict the correct Wikipedia entity for each mention. The details of each model are as follows:

* Query: In this baseline method we use Wikipedia's search API (https://pypi.org/project/wikipedia/) to query each mention. We use the top-ranked result as the predicted Wikipedia entity.
* Prior: Another common baseline used in entity disambiguation models is the entity-mention prior dictionary $p(e|m)$ obtained from Wikipedia. For this baseline, we simply pick the entity that occurs most frequently for each mention.
* deep-ed [11]: Uses a combination of entity embeddings, an entity-mention prior dictionary, and a contextual attention mechanism to create the local model. Entity embeddings are obtained by considering word-entity co-occurrence. The final global model treats the disambiguation task as a sequential decision problem solved using loopy belief propagation (LBP) with a linear conditional random field (CRF). Following the instructions of the model's authors, we retrained deep-ed by adding the mentions of our Reddit dataset into the training set.
* mulrel-nel [13]: Encodes pairwise relations between mentions as latent variables. It also adds dummy mentions to reduce noisy relations.
* wnel [14]: Previous entity disambiguation models rely heavily on the AIDA-train dataset for training. Here the authors introduce a linker model trained on AIDA-train and another unlabeled dataset with generated pseudo labels.
* End-2-End [16]: Similar to deep-ed, the End-2-End model generates an entity embedding by considering word-entity co-occurrence in Wikipedia. It first applies an LSTM to build word embeddings by combining word2vec and character embeddings. Then, a global voting algorithm combines word-entity correlation, entity-entity pairwise correlation, and a mention-entity prior dictionary. Although the End-2-End model can perform both mention detection and entity disambiguation, in these first experiments we use only the entity disambiguation portion of the model.

All of the non-baseline models (mulrel-nel, wnel, deep-ed, End-2-End) were trained on the training set (AIDA-Train) of the AIDA-CoNLL dataset [53]. AIDA-CoNLL also contains validation (AIDA-A) and test (AIDA-B) sets. Following the advice of the authors, we retrained the deep-ed and End-2-End models and expanded the entity sets using the mentions of our dataset.
If we did not do this, the unseen mentions and entities from our Reddit dataset would not be available to deep-ed and End-2-End to disambiguate. Unfortunately, this entity expansion results in a slight performance decrease on the original, non-social media, datasets.

We measure the micro-precision, micro-recall, and micro-$F_{1}$ scores on the predicted entities compared to the ground truth. For the ED task, precision is simply the number of correct predictions out of all predictions made. Recall is measured as the number of correct predictions out of the total number of ground truth entities. It is important to note that most models will not make a prediction for a mention if the model cannot find a corresponding entity in the knowledge base. We report the results of the models on the non-social datasets in Table 5; the results on the AIDA-CoNLL datasets are reported in Table 6; and the results on the new Reddit dataset are reported in Table 7. Results on the Reddit dataset specifically describe the performance of the models on the All-corrected Gold and Silver annotations presented in the current work, as well as the union of these two groups (Comb.).

Two important conclusions can be reached from these results. First, the simple Query baseline model works surprisingly well on the Reddit annotation dataset, but this performance decreases as Silver annotations are included in the dataset. Second, the deep neural network models do not substantially outperform the baseline models on our Reddit dataset. The Query baseline we used achieves the highest $F_{1}$ score of all methods on our combined dataset – even higher than the best neural model. An interesting effect can also be seen in the precision of the neural End-2-End model: it achieves the highest precision of all methods but also the lowest recall. This indicates that the model is likely missing many of the entities that are used in our Reddit dataset.
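The micro-averaged metrics defined above can be sketched directly: precision is correct predictions over all predictions made, recall is correct predictions over all ground-truth entities, and models may abstain on mentions with no candidate in the knowledge base, which is why predictions can be fewer than ground truths.

```python
def micro_scores(predictions, ground_truth):
    """predictions / ground_truth: dicts mapping a mention id to an entity.
    Mentions absent from `predictions` (model abstained) hurt recall but
    not precision."""
    correct = sum(1 for m, e in predictions.items() if ground_truth.get(m) == e)
    p = correct / len(predictions) if predictions else 0.0
    r = correct / len(ground_truth) if ground_truth else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1
```

This also explains why the tables show identical P, R, and F1 for models that predict an entity for every mention: when no predictions are skipped, precision and recall share the same denominator.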
In sum, these results show that existing models trained and tuned with typical entity annotations simply do not transfer to Reddit data.

### 3.2 End-to-end Entity Linking Models

Our next goal is to evaluate existing end-to-end models on the overall entity linking task, which combines the mention detection and entity disambiguation tasks. There has been much less work on the end-to-end task compared to the entity disambiguation task, and there are very few systems with available source code. Despite the dearth of models, the end-to-end task is the more realistic scenario because entity mentions are difficult and costly to annotate in text data. In addition, the end-to-end task is typically more difficult because errors can accumulate through the end-to-end pipeline [54].

We formally define the end-to-end task as follows. Given a document $D=\langle w_{1},w_{2},\ldots,w_{n}\rangle$ containing a sequence of $n$ tokens and a knowledge base of entities $E=\{e_{1},e_{2},\ldots,e_{m}\}$, the goal of end-to-end entity linking is to find a list of mention-entity pairs $L=\{(m_{1},e_{1}),\ldots,(m_{k},e_{k})\}$ where each mention $m_{k}=\langle w_{a},\ldots,w_{b}\rangle\subseteq D$ is correctly mapped to its representative entity $e_{k}\in E$.

For the end-to-end task we identified two high-quality baselines and two popular deep neural network models that use the same datasets as the entity disambiguation task. Given text without any annotated mentions, each model returns a list of entity mentions and an associated Wikipedia entity. The details of each model are as follows:

* NER + Query: In this baseline method we use Stanford's Named Entity Recognition package [55] to first perform entity mention detection, and then provide each discovered mention to Wikipedia's search API. We use the top-ranked result as the predicted Wikipedia entity.
* NER + Prior: We use Stanford's NER for entity mention detection and then pick the most frequently appearing Wikipedia entity for each mention.
* End-2-End [16]: We retrain the End-2-End model after adding the Reddit dataset to the test datasets.
* REL [52]: We use REL, which combines a state-of-the-art NER method, flair [56], with the entity disambiguation approach of Le and Titov [13] to create an end-to-end technique.

As before, we again measure the micro-precision, micro-recall, and micro-$F_{1}$ of predicted entities. Following the evaluation methodology used by Kolitsas et al. [16], we utilized strong matching, i.e., the mention and entity must match exactly to count as a correct prediction. Strong matching also requires that detected mentions do not overlap. One important difference between the end-to-end precision and the disambiguation precision is how missed predictions are counted. In the end-to-end task, any prediction not in the ground truth counts as a miss, even though it may actually be a correct prediction that the annotators simply did not record. Therefore, when we evaluate on the gold and silver Reddit datasets separately, they have a lower precision than on the combined dataset. The recall value for end-to-end evaluation is simply the number of correct matches out of the total number of ground truth mention-entity pairs.

We report the results of the end-to-end experiments in the bottom portion of Tables 5, 6, and 7 for the non-social, AIDA, and Reddit datasets, respectively. We observe that existing models, even the deep neural network models, generally exhibit poor performance on the Reddit annotation dataset. We also observe that both of our baseline models perform markedly worse in precision compared to the neural methods, but the recall measurements are comparable. We prefer to focus our attention on the recall measurements of these experiments because of the issues with precision previously discussed.
However, it is clear that the neural models exhibit fairly high precision and fairly low recall rates on the combined dataset. These results indicate that the advanced neural models are rather conservative in their predictions. Comparing recall rates on the social and non-social datasets, it appears that the existing models are unable to find the non-standard and non-popular entities that are a common occurrence in social data.

Table 8: Analysis of the two different error types, entity errors (E) and mention errors (M), for end-to-end models on the Reddit dataset.

| Model | Gold E | Gold M | Silver E | Silver M | Comb. E | Comb. M |
|---|---|---|---|---|---|---|
| NER + Prior | 75 | 881 | 135 | 2179 | 210 | 3060 |
| NER + Query | 61 | 881 | 156 | 2179 | 217 | 3060 |
| End-2-End | 17 | 1048 | 19 | 2369 | 36 | 3418 |
| REL | 70 | 803 | 143 | 2027 | 213 | 2830 |

Because of this poor performance of existing models, we were curious to investigate the primary cause of error. In general, we find two possible causes: mention detection error and disambiguation error. Mention detection errors occur when the model fails to detect an appropriate mention from the text. Disambiguation errors occur when a mention is correctly identified, but the wrong entity is linked. We report these two types of errors on the Reddit dataset in Table 8. These results indicate that the majority of the errors occur when the model fails to identify a mention. Although this is unsurprising given that these models were trained on traditional entity linking datasets, these results clearly indicate a need for better social media entity linking models. ## 4 Discussion and Implications To summarize, we return to the three research questions raised in the present work. First (RQ1), despite the open-ended nature of this task, we found a surprising agreement in the annotations. We showed that human annotators largely agree on entity labels in social media discourse.
Human annotations on 619 posts found that all three annotators agreed on an average of one entity per post title, and two of the three annotators agreed on an additional two entities per post. We also provided some simple corrections to the raw annotations to further boost this agreement. We expect that the dataset constructed from this task, which is extracted from hundreds of social media threads and hand-annotated by hundreds of human annotators, will be useful for studying social commentary and discussion. The primary differences between the Reddit Entity Linking dataset and existing datasets are that (i) this Reddit dataset has many updated pop culture references, idioms, and memes that are not present in existing data sources; and (ii) compared to news sources, Wikipedia, and other more-professional data sources used to create existing models, this Reddit dataset has significantly more noise, typos, unorthodox spellings, and abbreviations. These unique qualities make this new dataset a possible source for computational social scientists and computational linguists in their development of robust entity linking tools and in their study of social commentary. Second (RQ2), we show that standard datasets commonly used to train entity linking models do not transfer to social media data. The primary problem with the existing models is that they are typically trained on journalistic-quality datasets. However, social media data is far less polished and only sometimes conforms to traditional sentence structure and grammar. In addition, existing datasets do not contain the type of thread-structure that is common in social media discussions. Discussion threads (and the threading system in general) are a relatively new rhetorical mechanism with many undiscovered and poorly understood implications in natural language discourse.
It is clear that an entity linking dataset annotated from social media data is needed so that automatic entity linking models can be developed and adapted to this large and growing corpus of human discourse. Third (RQ3), by performing an unbiased comparison of various entity linking models, we are able to discern which parts of the models are most responsible for errors in entity linking. Interestingly, we show that baseline models (i.e., simply querying the mention text) perform surprisingly well on the dataset. With that in mind, recent works that combine the mention detection and entity disambiguation tasks into end-to-end systems are mostly based on the prior probability; the utility of a model tends to lie in the addition of local and global context information. However, for social media data, this technique is difficult because priors on social media data require significant human annotation and can quickly become out of date. Comparing the results of the end-2-end model against the NER baselines and entity disambiguation baselines, we find that the majority of the error can be attributed to the mention detection sub-task. That is, identifying which word or words represent an entity mention in social media text appears to be very challenging. But once these mentions are identified, linking them to their proper entity is less challenging. Having identified these limitations with the state of the art and armed with the Reddit Entity Linking dataset, the goal of our future work will be to create better end-to-end entity linking models that operate on social media discourse. ## 5 Acknowledgements We thank Trenton Ford for helping to prepare this manuscript. This work is funded by the US Army Research Office (W911NF-17-1-0448) and the US Defense Advanced Research Projects Agency (DARPA W911NF-17-C-0094). ## References Cited * [1] M. De Gemmis, P. Lops, C. Musto, F. Narducci, G.
Semeraro, Semantics-aware content-based recommender systems, in: Recommender Systems Handbook, Springer, 2015, pp. 119–159. * [2] M. Ghazvininejad, C. Brockett, M.-W. Chang, B. Dolan, J. Gao, W.-t. Yih, M. Galley, A knowledge-grounded neural conversation model, in: AAAI, 2018. * [3] X. Ren, M. Jiang, J. Shang, J. Han, Constructing structured information networks from massive text corpora, in: TheWebConf, 2017, pp. 951–954. * [4] M. Dredze, P. McNamee, D. Rao, A. Gerber, T. Finin, Entity disambiguation for knowledge base population, in: ACL, 2010, pp. 277–285. * [5] B. Shi, T. Weninger, Open-world knowledge graph completion, in: AAAI, 2018. * [6] E. Strubell, P. Verga, D. Andor, D. Weiss, A. McCallum, Linguistically-informed self-attention for semantic role labeling, in: EMNLP, 2018, pp. 5027–5038. * [7] B. Aktas, T. Scheffler, M. Stede, Anaphora resolution for twitter conversations: An exploratory study, NAACL-HLT (2018) 1. * [8] Y. Chen, L. Wu, M. J. Zaki, GraphFlow: Exploiting conversation flow with graph neural networks for conversational machine comprehension, in: IJCAI, 2020, pp. 1230–1236. doi:10.24963/ijcai.2020/171. * [9] L. Derczynski, D. Maynard, G. Rizzo, M. Van Erp, G. Gorrell, R. Troncy, J. Petrak, K. Bontcheva, Analysis of named entity recognition and linking for tweets, Information Processing & Management 51 (2) (2015) 32–49. * [10] V. Yadav, S. Bethard, A survey on recent advances in named entity recognition from deep learning models, in: COLING, 2018, pp. 2145–2158. URL https://www.aclweb.org/anthology/C18-1182 * [11] O.-E. Ganea, T. Hofmann, Deep joint entity disambiguation with local neural attention, in: EMNLP, Copenhagen, Denmark, 2017, pp. 2619–2629. doi:10.18653/v1/D17-1277. URL https://www.aclweb.org/anthology/D17-1277 * [12] C. Ran, W. Shen, J. Wang, An Attention Factor Graph Model for Tweet Entity Linking, in: TheWebConf, 2018, pp. 1135–1144. doi:10.1145/3178876.3186012. URL https://doi.org/10.1145/3178876.3186012 * [13] P. Le, I.
Titov, Improving Entity Linking by Modeling Latent Relations between Mentions, in: ACL, Melbourne, Australia, 2018, pp. 1595–1604. doi:10.18653/v1/P18-1148. URL http://aclweb.org/anthology/P18-1148 * [14] P. Le, I. Titov, Boosting entity linking performance by leveraging unlabeled documents, in: ACL, pp. 1935–1945. doi:10.18653/v1/P19-1187. URL https://www.aclweb.org/anthology/P19-1187 * [15] S. Shimaoka, P. Stenetorp, K. Inui, S. Riedel, Neural architectures for fine-grained entity type classification, in: ACL, pp. 1271–1280. URL https://www.aclweb.org/anthology/E17-1119 * [16] N. Kolitsas, O.-E. Ganea, T. Hofmann, End-to-end neural entity linking, in: CoNLL, 2018, pp. 519–529. * [17] A. Ritter, S. Clark, O. Etzioni, et al., Named entity recognition in tweets: an experimental study, in: EMNLP, 2011, pp. 1524–1534. * [18] X. Ling, D. S. Weld, Fine-grained entity recognition, in: AAAI, 2012. * [19] D. Gillick, N. Lazic, K. Ganchev, J. Kirchner, D. Huynh, Context-dependent fine-grained entity type tagging, arXiv preprint arXiv:1412.1820. * [20] M. Dredze, N. Andrews, J. DeYoung, Twitter at the grammys: A social media corpus for entity linking and disambiguation, in: Proceedings of The Fourth International Workshop on Natural Language Processing for Social Media, 2016, pp. 20–25. * [21] E. Meij, W. Weerkamp, M. De Rijke, Adding semantics to microblog posts, in: WSDM, 2012, pp. 563–572. * [22] A. Mittos, S. Zannettou, J. Blackburn, E. D. Cristofaro, Analyzing genetic testing discourse on the web through the lens of twitter, reddit, and 4chan, ACM Transactions on the Web 14 (4) (2020) 1–38. * [23] J. Choi, J. Yoon, J. Chung, B.-Y. Coh, J.-M. Lee, Social media analytics and business intelligence research: A systematic review, Information Processing & Management 57 (6) (2020) 102279. * [24] A. Park, M. Conway, A. T. 
Chen, Examining thematic similarity, difference, and membership in three online mental health communities from reddit: a text mining and visualization approach, Computers in human behavior 78 (2018) 98–112. * [25] M. Yoo, S. Lee, T. Ha, Semantic network analysis for understanding user experiences of bipolar and depressive disorders on reddit, Information Processing & Management 56 (4) (2019) 1565–1575. * [26] A. Zirikly, P. Resnik, O. Uzuner, K. Hollingshead, Clpsych 2019 shared task: Predicting the degree of suicide risk in reddit posts, in: Proceedings of the Sixth Workshop on Computational Linguistics and Clinical Psychology, 2019, pp. 24–33. * [27] E. Turcan, K. McKeown, Dreaddit: A reddit dataset for stress analysis in social media, in: Proceedings of the Tenth International Workshop on Health Text Mining and Information Analysis (LOUHI 2019), Association for Computational Linguistics, pp. 97–107. doi:10.18653/v1/D19-6213. * [28] M. Khodak, N. Saunshi, K. Vodrahalli, A large self-annotated corpus for sarcasm, in: Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), European Language Resources Association (ELRA). * [29] S. Dutta, D. Das, T. Chakraborty, Changing views: Persuasion modeling and argument extraction from online discussions, Information Processing & Management 57 (2) (2020) 102085. * [30] M. Thelwall, E. Stuart, She’s reddit: A source of statistically significant gendered interest information?, Information processing & management 56 (4) (2019) 1543–1558. * [31] K. B. Enes, P. P. V. Brum, T. O. Cunha, F. Murai, A. P. C. da Silva, G. L. Pappa, Reddit weight loss communities: do they have what it takes for effective health interventions?, in: 2018 IEEE/WIC/ACM International Conference on Web Intelligence, IEEE, 2018, pp. 508–513. * [32] M. Glenski, E. Saldanha, S. Volkova, Characterizing speed and scale of cryptocurrency discussion spread on reddit, in: TheWebConf, 2019, pp. 560–570. * [33] L. 
Manikonda, G. Beigi, H. Liu, S. Kambhampati, Twitter for Sparking a Movement, Reddit for Sharing the Moment: #metoo through the Lens of Social Media, in: SBP-BRiMS. * [34] S. Priya, R. Sequeira, J. Chandra, S. K. Dandapat, Where should one get news updates: Twitter or Reddit, Online Social Networks and Media 9 (2019) 17–29. doi:10.1016/j.osnem.2018.11.001. URL http://www.sciencedirect.com/science/article/pii/S2468696418300338 * [35] W. Shen, J. Wang, J. Han, Entity linking with a knowledge base: Issues, techniques, and solutions, IEEE Transactions on Knowledge and Data Engineering 27 (2) (2014) 443–460. * [36] O. Sevgili, A. Shelmanov, M. Arkhipov, A. Panchenko, C. Biemann, Neural entity linking: A survey of models based on deep learning, arXiv preprint arXiv:2006.00575. * [37] L. Logeswaran, M.-W. Chang, K. Lee, K. Toutanova, J. Devlin, H. Lee, Zero-shot entity linking by reading entity descriptions, in: ACL, Association for Computational Linguistics, pp. 3449–3460. doi:10.18653/v1/P19-1335. URL https://www.aclweb.org/anthology/P19-1335 * [38] Y. Onoe, G. Durrett, Fine-grained entity typing for domain independent entity linking, in: AAAI, 2020, pp. 8576–8583. * [39] J. Raiman, O. Raiman, Deeptype: Multilingual entity linking by neural type system evolution, in: AAAI, 2018. * [40] Z. Fang, Y. Cao, Q. Li, D. Zhang, Z. Zhang, Y. Liu, Joint entity linking with deep reinforcement learning, in: TheWebConf, 2019, pp. 438–447. * [41] X. Yang, X. Gu, S. Lin, S. Tang, Y. Zhuang, F. Wu, Z. Chen, G. Hu, X. Ren, Learning dynamic context augmentation for global entity linking, in: EMNLP, 2019, pp. 271–281. * [42] W. Shen, J. Wang, P. Luo, M. Wang, Linking named entities in tweets with knowledge base via user interest modeling, in: SIGKDD, 2013, pp. 68–76. * [43] S. Guo, M.-W. Chang, E. Kiciman, To link or not to link? a study on end-to-end tweet entity linking, in: NAACL-HLT, 2013, pp. 1020–1030. * [44] Y. Fang, M.-W.
Chang, Entity linking on microblogs with spatial and temporal signals, Transactions of the Association for Computational Linguistics 2 (2014) 259–272. * [45] J. Devlin, M.-W. Chang, K. Lee, K. Toutanova, Bert pre-training of deep bidirectional transformers for language understanding, in: NAACL-HLT (1), 2019. * [46] S. Broscheit, Investigating entity knowledge in bert with simple neural end-to-end entity linking, in: CoNLL, 2019, pp. 677–685. * [47] I. Yamada, A. Asai, H. Shindo, H. Takeda, Y. Matsumoto, LUKE: Deep contextualized entity representations with entity-aware self-attention, in: EMNLP, 2020, pp. 6442–6454. URL https://www.aclweb.org/anthology/2020.emnlp-main.523 * [48] H. Rosales-Méndez, A. Hogan, B. Poblete, Fine-Grained Evaluation for Entity Linking, in: EMNLP, Association for Computational Linguistics, pp. 718–727. doi:10.18653/v1/D19-1066. URL https://www.aclweb.org/anthology/D19-1066 * [49] K. Bontcheva, L. Derczynski, I. Roberts, Crowdsourcing Named Entity Recognition and Entity Linking Corpora, in: N. Ide, J. Pustejovsky (Eds.), Handbook of Linguistic Annotation, Springer Netherlands, Dordrecht, 2017, pp. 875–892. doi:10.1007/978-94-024-0881-2_32. URL http://link.springer.com/10.1007/978-94-024-0881-2_32 * [50] I. H. Witten, D. N. Milne, An effective, low-cost measure of semantic relatedness obtained from wikipedia links. * [51] I. Yamada, H. Shindo, H. Takeda, Y. Takefuji, Joint learning of the embedding of words and entities for named entity disambiguation, in: CoNLL, 2016, pp. 250–259. * [52] J. M. van Hulst, F. Hasibi, K. Dercksen, K. Balog, A. P. de Vries, Rel: An entity linker standing on the shoulders of giants, in: SIGIR, New York, NY, USA, 2020, pp. 2197–2200. doi:10.1145/3397271.3401416. URL https://doi.org/10.1145/3397271.3401416 * [53] J. Hoffart, M. A. Yosef, I. Bordino, H. Fürstenau, M. Pinkal, M. Spaniol, B. Taneva, S. Thater, G. Weikum, Robust disambiguation of named entities in text, in: EMNLP, 2011, pp. 782–792. * [54] G. Luo, X.
Huang, C.-Y. Lin, Z. Nie, Joint entity recognition and disambiguation, in: EMNLP, 2015, pp. 879–888. * [55] P. Qi, Y. Zhang, Y. Zhang, J. Bolton, C. D. Manning, Stanza: A python natural language processing toolkit for many human languages, in: ACL, Online, 2020, pp. 101–108. doi:10.18653/v1/2020.acl-demos.14. URL https://www.aclweb.org/anthology/2020.acl-demos.14 * [56] A. Akbik, D. Blythe, R. Vollgraf, Contextual string embeddings for sequence labeling, in: ACL, 2018, pp. 1638–1649.
2101.01229
# A Survey on Embedding Dynamic Graphs Claudio D. T. Barros National Laboratory for Scientific Computing (LNCC), Petrópolis, RJ, Brazil Matheus R. F. Mendonça National Laboratory for Scientific Computing (LNCC), Petrópolis, RJ, Brazil Alex B. Vieira Federal University of Juiz de Fora (UFJF), Juiz de Fora, MG, Brazil Artur Ziviani National Laboratory for Scientific Computing (LNCC), Petrópolis, RJ, Brazil ###### Abstract Embedding static graphs in low-dimensional vector spaces plays a key role in network analytics and inference, supporting applications like node classification, link prediction, and graph visualization. However, many real-world networks present dynamic behavior, including topological evolution, feature evolution, and diffusion. Therefore, several methods for embedding dynamic graphs have been proposed to learn network representations over time, facing novel challenges, such as time-domain modeling, temporal features to be captured, and the temporal granularity to be embedded. In this survey, we overview dynamic graph embedding, discussing its fundamentals and the recent advances developed so far. We introduce the formal definition of dynamic graph embedding, focusing on the problem setting and introducing a novel taxonomy for dynamic graph embedding input and output. We further explore different dynamic behaviors that may be encompassed by embeddings, classifying them into topological evolution, feature evolution, and processes on networks. Afterward, we describe existing techniques and propose a taxonomy for dynamic graph embedding techniques based on algorithmic approaches, from matrix and tensor factorization to deep learning, random walks, and temporal point processes. We also elucidate the main applications, including dynamic link prediction, anomaly detection, and diffusion prediction, and we further state some promising research directions in the area.
Keywords: dynamic networks, graph embedding, graph representation learning, dynamic graphs, dynamic graph embedding. ## 1 Introduction Graph-structured networks arise naturally in several real-world complex systems, including social networks, biological networks, knowledge graphs, and finance, where the interactions between nodes allow the understanding of the structural information of these domains [1]. Therefore, graph-aware learning tasks play a key role in the machine learning and network science literature, and scalable approaches to deal with these high-dimensional non-Euclidean data have been explored to address the computational challenges associated with graph data-driven analytics and inference. Embedding graphs in low-dimensional vector spaces has been applied to extract features from networks and encode topological and semantic information, and many researchers have been using these network representation learning approaches for several applications, including node classification and clustering, link prediction, and network visualization [2, 3, 4]. Previous work on network representation learning has focused on static graphs, either representing existing connections at a fixed moment (i.e., a graph snapshot), or node interactions over time that are aggregated into a single static graph [5]. However, several real networks display dynamic behavior, including nodes and edges being added or removed from the system [6], labels and other properties changing over time [7], and diffusion in the network [8]. The network's temporal correlations are lost during the aggregation process and, in this sense, approaches to develop embedding methods for dynamic networks have been proposed over the past few years. These efforts have improved tasks such as link prediction [9] and node classification [10] over time, while enabling novel applications, including event prediction [8], anomaly detection [10] and diffusion prediction [11].
Several challenges arise when developing an approach to embed dynamic graphs, such as (i) how to model the time domain, i.e. discrete-time or continuous-time, (ii) which dynamic behaviors are desired to be captured, and (iii) which temporal granularity will be represented in the vector space, i.e. the same granularity as the dataset, or a coarser granularity summarizing dynamics in a finer timescale. Considering the increasing number of studies proposing dynamic graph embedding techniques, these discussions are becoming more important to advance the comprehension of dynamic network representation learning. Therefore, in this survey, we overview the problem of embedding dynamic graphs, discussing its fundamental aspects and the recent advances that have been made so far. We introduce the formal definition of dynamic graph embedding, discussing different dynamic network models whose representations may be extracted, and introducing a novel taxonomy for the problem settings, i.e. embedding input and output. Moreover, we explore and classify different dynamic behaviors that may be captured by embeddings, describe existing methods, discuss their similarities and differences, and propose a detailed taxonomy based on algorithmic approaches. To the best of our knowledge, a few attempts have been made so far to survey dynamic graph embeddings. Kazemi et al. [12] focus on recent representation learning techniques for dynamic graphs by using an encoder-decoder framework. Xie et al. [13] propose a taxonomy based on algorithmic approaches to encode graphs. Additionally, Skarding et al. [14] survey how the dynamic network topology can be modeled using dynamic graph neural networks. Our work is different from the aforementioned surveys since we discuss different dynamic network models that have been or may be used for embedding, in addition to detecting temporal behaviors in networks that can be captured.
Moreover, we extend the dynamic graph embedding techniques taxonomy, encompassing methods based on graph kernels, temporal point processes, and agnostic methods. In this sense, this survey has the following contributions: * • A taxonomy of dynamic graph embedding based on problem settings, extending graph embedding input and output to handle temporal heterogeneity (i.e. timestamps with labels, classifying network behavior over time) and temporal embeddings (i.e. different temporal granularities to represent in the low-dimensional vector space); * • A discussion about different dynamic behaviors in networks that embedding models may capture, including topological evolution (concerning both node and edge addition or removal), feature evolution (regarding changes of node/edge features or labels over time) and processes on networks (diffusion and the global role of nodes and its evolution). Furthermore, we also bring some perspectives about temporal point processes on networks. * • A detailed analysis of embedding techniques for dynamic graphs, focused on a classification concerning their algorithmic approaches, comparing different methods proposed in the literature and discussing their similarities, differences and other particularities; * • The categorization, according to the topological structure, of several dynamic graph embedding applications, focused on node-related tasks, edge-related tasks, node- and edge-related tasks, and graph-related tasks; * • A discussion of future research directions in the area in terms of problem settings, solution techniques, and modeling, in addition to applications and representation learning on generalized graphs (i.e. hypergraphs and higher-order graphs). The remainder of this survey is organized as follows.
In Section 2, we introduce the fundamentals behind the embedding of dynamic graphs, reviewing static graph embedding, defining the problem of dynamic graph embedding, and presenting different dynamic graph models explored in the embedding scenario, along with other problem settings, including the dynamic graph embedding input and output as well as the dynamic behaviors that may be captured. In Section 3, we categorize the literature based on the embedding techniques, unraveling the insights behind each paradigm, and provide a detailed comparison of different techniques. After that, we present in Section 4 some concrete examples of applications enabled by the dynamic graph embedding methods discussed in Section 3, allowing the reader to better grasp the practical utility of these methods. Finally, our conclusions are presented in Section 5, alongside some discussions on potential future research directions in the field of dynamic graph embedding. ## 2 Fundamentals Behind the Embedding of Dynamic Graphs In this section, we first review basic graph concepts and static graph embedding, introducing the definition of dynamic graphs. Then, we formally define the dynamic graph embedding problem. Thereafter, we describe possible problem settings, starting with the dynamic graph embedding input and detailing different outputs. We expand the initial graph embedding concepts considering time-varying graph models, time granularity, and temporal aggregation, and discuss dynamic behaviors that may be captured by embeddings. ### 2.1 Graphs and Static Graph Embedding A graph $G=(V,E)$ is a mathematical structure, where $V=\\{v_{1},...,v_{N}\\}$ is a finite set of $N$ nodes (vertices), and $E\subseteq\\{(v_{i},v_{j})\,|\,(v_{i},v_{j})\in V\times V\\}$ is a finite set of unordered pairs of vertices, whose elements $e_{ij}=(v_{i},v_{j})$ are called edges (links). The adjacency matrix $A$ of a graph is an $N\times N$ matrix whose element $A_{ij}=1$ if edge $e_{ij}\in E$, or $A_{ij}=0$ otherwise.
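The adjacency matrix just defined can be built directly from an edge list; a minimal Python/NumPy sketch for an undirected, optionally weighted graph (the function name and argument layout are ours, for illustration only):

```python
import numpy as np

def adjacency_matrix(n, edges, weights=None):
    """Build the N x N adjacency (or weight) matrix of an undirected
    graph from a list of edges (i, j) with 0-based node indices."""
    A = np.zeros((n, n))
    for k, (i, j) in enumerate(edges):
        w = 1.0 if weights is None else weights[k]
        A[i, j] = w
        A[j, i] = w  # undirected: the matrix is symmetric
    return A
```

For a directed graph, only the `A[i, j]` assignment would be kept; for a weighted graph, this matrix coincides with the weight matrix $W$.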
A directed graph is a graph in which an edge $e_{ij}\in E$ is an ordered pair, i.e. the edge $e_{ij}$ is oriented. Otherwise, the graph is undirected. A weighted graph is a graph in which a weight function $w:E\rightarrow\mathbb{R}$ is assigned to it. Each edge has a weight associated with it, and it is possible to define a weight matrix $W$ such that $W_{ij}=w(e_{ij})$. Otherwise, the graph is unweighted. A homogeneous graph is a graph in which the number of node types and the number of edge types are both 1, i.e. $|\mathcal{L}^{v}|=|\mathcal{L}^{e}|=1$, where $\mathcal{L}^{v}$ and $\mathcal{L}^{e}$ denote the sets of node and edge types, and every node in $G$ belongs to a single node category and every edge belongs to a single edge category. A heterogeneous graph is a graph in which $|\mathcal{L}^{v}|>1$ or $|\mathcal{L}^{e}|>1$. Different ways to define proximity or similarity between nodes in a graph may be conceived [3]. The first-order proximity $S_{ij}^{(1)}$ between nodes $v_{i}$ and $v_{j}$ is the weight of the edge $e_{ij}$, i.e., $W_{ij}$. The second-order proximity $S_{ij}^{(2)}$ between nodes $v_{i}$ and $v_{j}$ is a similarity between $v_{i}$’s neighbourhood $S_{i}^{(1)}$ and $v_{j}$’s neighbourhood $S_{j}^{(1)}$ given by some defined metric, where $S_{i}^{(1)}=[S_{i1}^{(1)},...,S_{iN}^{(1)}]$ and $S_{j}^{(1)}=[S_{j1}^{(1)},...,S_{jN}^{(1)}]$. Higher-order proximities can be defined as well, including the Katz centrality [15], which is a weighted summation over the paths between two vertices in the graph whose weight is an exponential decay function of its length, and the Adamic/Adar Index [16], which counts the number of vertices connecting two nodes taking into account a weight depending on the reciprocal of the logarithm of the neighbor’s degree. The central idea behind graph embedding lies in learning a mapping that embeds nodes, edges, subgraphs, or even entire graphs, in a low-dimensional vector space, where the embedding dimension is expected to be much lower than the total number of nodes in the network.
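The proximity measures above can be sketched concretely. This is an illustration, not a reference implementation: cosine similarity is used as one possible choice for the "defined metric" of the second-order proximity, and the Adamic/Adar sketch assumes every common neighbor has degree greater than 1 so its logarithm is nonzero.

```python
import math
import numpy as np

def first_order(W, i, j):
    # S^(1)_ij is simply the edge weight W_ij
    return W[i, j]

def second_order(W, i, j):
    # cosine similarity between the neighbourhood vectors S_i^(1), S_j^(1)
    ni, nj = W[i], W[j]
    denom = np.linalg.norm(ni) * np.linalg.norm(nj)
    return float(ni @ nj / denom) if denom else 0.0

def adamic_adar(A, i, j):
    # sum over common neighbours k of 1 / log(deg(k));
    # assumes deg(k) > 1 for every common neighbour
    common = np.flatnonzero(A[i] * A[j])
    return sum(1.0 / math.log(A[k].sum()) for k in common)
```

For example, on the complete bipartite graph $K_{2,2}$, two nodes on the same side share both neighbors, each of degree 2, giving an Adamic/Adar index of $2/\ln 2$.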
More specifically, given a graph $G=(V,E)$, and a predefined embedding dimension $d$, such that $d\ll|V|$, the problem of graph embedding is to map $G$ into a $d$-dimensional space, in which graph properties are preserved as much as possible, i.e. topology and similarity measures [2, 3, 4]. Based on the output of the graph embedding, four categories may be defined: (i) node embedding, where vector embeddings are learned for each node; (ii) edge embedding, where edges are mapped into the embedding space; (iii) substructure embedding, in which subgraphs (i.e. clusters, communities, graphlets, …) are represented in the vector space; and (iv) whole-graph embedding, i.e. an entire graph is mapped into a single vector [3] (see Fig. 1). Figure 1: A toy example of embedding a graph into 2D space taking into account different granularities. (a) Sample network used as a reference for graph embedding, where the different node colors depict different substructures, and different edge colors depict intra-substructure and inter-substructure connections. (b) Node embedding, (c) edge embedding, (d) substructure embedding, and (e) whole-graph embedding: the different static graph embedding outputs, as described by Cai et al. [3], where $d=2$. Note that the colors refer to the substructures and connections displayed in (a). Static graph embedding taxonomies have been proposed in the past few years [4, 3]. Graph embedding based on matrix factorization represents some graph similarity in the form of a matrix and factorizes this matrix to obtain a node embedding. The problem of graph embedding is treated as a structure-preserving dimensionality reduction problem, which assumes the input data lie in a low-dimensional manifold. Approaches based on deep learning apply deep neural architectures on graphs, including autoencoders (AEs), convolutional neural networks (CNNs), and variational autoencoders (VAEs).
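As a concrete illustration of the matrix factorization paradigm, the following sketch (our own, not any specific cited method) factorizes a similarity matrix with a truncated SVD to obtain $d$-dimensional node embeddings; scaling by the square root of the singular values is one common convention.

```python
import numpy as np

def factorization_embedding(S, d):
    """Embed nodes by truncated SVD of a similarity matrix S:
    keep the top-d singular directions, scaled by the square root
    of the corresponding singular values."""
    U, sigma, _ = np.linalg.svd(S, full_matrices=False)
    return U[:, :d] * np.sqrt(sigma[:d])
```

Applied to an adjacency or higher-order proximity matrix, each row of the result is one node's embedding vector.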
Random walk approaches generate node sequences from a graph to create contexts for each node, then applying techniques from natural language processing for learning embeddings. They try to preserve higher-order proximity between nodes by maximizing the probability of occurrence of subsequent nodes in fixed-length random walks, using neural language models such as SkipGram. Cai et al. [3] further suggest other paradigms, such as edge reconstruction based optimization, which learns representations that directly optimize either the edge reconstruction probability or the edge reconstruction loss; graph kernel-based methods, which decompose the graph into atomic substructures (such as graphlets and subtrees) and build a vector using these features; and generative models, which specify the joint distribution of the input features and the class labels conditioned on a set of parameters. Several further discussions about static graph embeddings were presented in other surveys and reviews [2, 17, 18], along with some works concerning knowledge graph embedding, focusing their analysis on tasks including knowledge graph completion and relation extraction [19, 20]. Moreover, Goyal and Ferrara developed GEM [4], an open-source Python library that provides a framework for graph embedding implementation, and Grattarola and Alippi have presented Spektral [21], another open-source Python library for building graph neural networks with the Keras API and TensorFlow 2, handling tasks such as node classification, link prediction, and graph generation. ### 2.2 Dynamic Graphs There are different mathematical formulations to describe interactions between nodes over the lifetime of a system [6, 22, 23].
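The random-walk paradigm described above can be illustrated with a short sketch: fixed-length walks are generated from each node and turned into (center, context) pairs, the training signal a SkipGram-style model would consume. All names here are illustrative, and no negative sampling or model training is shown.

```python
import random

def random_walks(adj, walk_len=5, walks_per_node=2, seed=0):
    """adj: dict node -> list of neighbours. Generates fixed-length
    random walks used as 'sentences' for a SkipGram-style model."""
    rng = random.Random(seed)
    walks = []
    for _ in range(walks_per_node):
        for start in adj:
            walk = [start]
            while len(walk) < walk_len:
                nbrs = adj[walk[-1]]
                if not nbrs:
                    break  # dead end: stop this walk early
                walk.append(rng.choice(nbrs))
            walks.append(walk)
    return walks

def skipgram_pairs(walks, window=2):
    """(center, context) pairs within a fixed window: the co-occurrence
    signal that preserves higher-order proximity between nodes."""
    pairs = []
    for walk in walks:
        for i, center in enumerate(walk):
            for j in range(max(0, i - window), min(len(walk), i + window + 1)):
                if i != j:
                    pairs.append((center, walk[j]))
    return pairs
```

Feeding these pairs to a word-embedding model (e.g. SkipGram with negative sampling) yields one vector per node, which is the essence of DeepWalk-style methods.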
A possible definition is to describe a dynamic graph by a mathematical structure $\mathcal{G}=(\mathcal{V},\mathcal{E},\mathcal{T})$, where $\mathcal{V}=\\{V(t)\\}_{t\in\mathcal{T}}\,$ is a collection of node sets over time, $\mathcal{E}=\\{E(t)\\}_{t\in\mathcal{T}}\,$ is a collection of edge sets over time, and $\mathcal{T}$ is the time span. For each $t\in\mathcal{T}$, it is possible to define a graph snapshot $G(t)=(V(t),E(t))$, i.e. a static graph representing a fixed timestamp $t$ of the dynamic graph. Adjacency matrix $A(t)$, weight matrix $W(t)$ and similarity matrix $S(t)$ are now time-dependent, and can be calculated for each snapshot $G(t)$, as well as node types $\mathcal{L}^{n}(t)$ and edge types $\mathcal{L}^{e}(t)$. Casteigts et al. [6] alternatively defined a dynamic graph as $\mathcal{G}=(V,E,\mathcal{T},\rho_{v},\rho_{e})$, where $V$ is a node set containing every node that is present in the network at any given time $t\in\mathcal{T}$, $E$ is an edge set defined similarly, and further defining a node presence function $\rho_{v}:V\times\mathcal{T}\rightarrow\\{0,1\\}$, indicating whether a given node $v\in V$ is available at a given time $t\in\mathcal{T}$, and an edge presence function $\rho_{e}:E\times\mathcal{T}\rightarrow\\{0,1\\}$, specifying if a given edge $e\in E$ exists at a timestamp $t\in\mathcal{T}$. Figure 2 depicts a dynamic network as a time-varying graph, containing 9 nodes along lifetime $\mathcal{T}=[0,7)$. It is noteworthy that, at the beginning of the network lifetime, only nodes A, B, C, and D are present, as well as links (A,B), (A,C), (B,C), and (B,D). As time passes, new nodes and edges arrive, even as nodes and edges are removed from the system. In the end, we have nodes B, E, F, G, H, and I, and links (B,E), (B,F), (E,G), and (H,I). The formulations we described above are sufficient for understanding the dynamic graph embedding methods we review in this paper. 
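The presence-function formulation of Casteigts et al. [6] translates directly into code. The sketch below (helper names are ours) extracts the snapshot $G(t)=(V(t),E(t))$ from interval-based presence functions like the ones depicted in Figure 2.

```python
def snapshot(nodes, edges, rho_v, rho_e, t):
    """Extract the static snapshot G(t) = (V(t), E(t)) of a dynamic
    graph, given node/edge presence functions rho_v(v, t), rho_e(e, t)."""
    V_t = {v for v in nodes if rho_v(v, t)}
    # an edge exists at t only if it is present and both endpoints are
    E_t = {(u, v) for (u, v) in edges
           if rho_e((u, v), t) and u in V_t and v in V_t}
    return V_t, E_t
```

With presence given as half-open intervals, querying `snapshot` at successive times reproduces the arrival and removal of nodes and edges described in the example above.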
It is important to mention that dynamic graphs can present even more complex temporal patterns, such as latency [6] (i.e. nodes/edges not arising instantaneously in the network, but instead taking a finite time interval to be established) and spatial-temporal edges [22] (i.e. a node at a given timestamp connected to another node at another timestamp). In Sec. 5, we propose future directions for embedding dynamic graphs concerning these properties.

Figure 2: A representation of a small dynamic network, showing edge presence (continuous intervals above edges) and node presence (bold continuous intervals next to nodes) intervals.

### 2.3 Dynamic Graph Modeling

One of the first aspects to be considered when modeling a dynamic network is to define its life span $\mathcal{T}$. Two different approaches may be adopted to model the system’s time domain: discrete-time approaches, where $\mathcal{T}$ is a discrete set, so that the evolution of a dynamic graph can be described by a sequence of static graphs, each with a fixed timestamp; and continuous-time approaches, where $\mathcal{T}$ is a continuous set, so that the evolution is modeled at a finer temporal granularity to encompass different events in real time [8]. Computationally, dynamic graph models assuming a discrete-time domain are easier to manipulate, and most of the existing embedding methods are based on this approach [10]. However, some authors have proposed to model more sophisticated phenomena, such as stochastic events, to leverage applications such as event time prediction (i.e. predicting when an edge or a node is created or removed from a network) [8, 24]. These approaches must therefore rely on a continuous-time lifespan to capture temporal evolution at an appropriate granularity. We briefly discuss some dynamic graph models covered by embedding methods presented in this survey.

#### 2.3.1 Graph Snapshots

This model represents a dynamic graph as a list of static graphs, i.e.
$\mathcal{G}=\\{{G}(t_{0}),...,G(t_{N_{S}-1})\\}$, where $G(t_{k})=(V(t_{k}),E(t_{k}))$ is a static graph with timestamp $t_{k}$ ($k\in\\{0,...,N_{S}-1\\}$), $N_{S}$ is the number of snapshots, $V(t_{k})$ is the node set at timestamp $t_{k}$ and $E(t_{k})$ is the edge set including all edges within the period $[t_{k},t_{k+1})$ [25]. Most of the methods for embedding dynamic graphs take this model as input, either by directly adopting a sequence of successive state sub-graphs that represent the network in a discrete way as time passes [26, 10, 27], or by splitting the time domain into non-overlapping windows of fixed duration and establishing a static graph for each window [28, 29].

#### 2.3.2 Difference Network Models

In many real problems, the number of edges inserted or removed at any given time is much smaller than the total number of edges, i.e. the topological evolution is sparse [30, 31]. Models representing these network changes take as input an initial graph $G_{t_{0}}$ and a list of adjacency matrix changes $\Delta\mathcal{A}=\\{\Delta A(t_{1}),...,\Delta A(t_{N_{R}-1})\\}$, where $\Delta A(t_{k})=A(t_{k})-A(t_{k-1})$, and $N_{R}$ is the total number of recorded timestamps. This definition may be extended to other similarity matrices [32, 7], and difference networks may be divided into a link formation network (concerning positive values of the adjacency matrix change) and a link dissolution network (concerning negative values of the adjacency matrix change) [33, 29]. Note that these models do not handle nodes being added to or removed from the network, as they rely on matrices with a fixed dimensionality.

#### 2.3.3 Continuous-Time Network Models

Continuous-time approaches may include timestamped edges (edges carrying the time they were created, or the time intervals concerning their existence in the network) [34] and link streams (a list of node interactions over time) [23].
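A link stream, for instance, can be sketched minimally as a chronological list of timestamped interactions, from which a static snapshot can be recovered by collapsing a time window; the names and data here are illustrative, not a specific cited model:

```python
# Sketch of a continuous-time model as a link stream: a chronological list of
# timestamped interactions (t, u, v). Illustrative data only.
link_stream = [
    (0.5, "A", "B"),
    (1.2, "B", "C"),
    (1.9, "A", "B"),
    (3.4, "C", "D"),
]

def interactions_in(stream, t_start, t_end):
    """All node interactions with timestamp in [t_start, t_end)."""
    return [(t, u, v) for (t, u, v) in stream if t_start <= t < t_end]

def aggregate_graph(stream, t_start, t_end):
    """Collapse a time window into a static edge set (one snapshot)."""
    return {frozenset((u, v)) for (_, u, v) in interactions_in(stream, t_start, t_end)}
```

Applying `aggregate_graph` over consecutive fixed-duration windows reproduces the snapshot model of Sec. 2.3.1, which illustrates how the two representations relate.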
Events in the network, such as the creation and removal of nodes and edges, may occur at any time $t\in\mathcal{T}$, and may be instantaneous (i.e. much faster than the typical temporal granularity of the system) or may be assigned a latency [6]. Several dynamic graph embedding methods rely on continuous-time networks, either by modeling timestamped edges as stochastic point processes [24] or by leveraging link streams [35]. Furthermore, node arrival and removal may be included in these network models, as proposed in the literature on stream graphs [23] and adopted by some embedding techniques [8, 36].

### 2.4 Temporal Point Processes on Graphs

As discussed above, node and edge churn may be modeled by stochastic events in a continuous-time domain, normally as stochastic point processes: random processes whose realizations are comprised of discrete events in continuous time. A temporal point process is a point process that can be represented as a counting process $N(t)$, recording the number of events up to time $t$, thus being useful for modeling sequential asynchronous discrete events occurring in continuous time [12]. The conditional intensity function $\lambda(t)$ characterizes a temporal point process such that $\lambda(t)\,\Delta t$ is the conditional probability of observing an event in the tiny window $[t,t+\Delta t)$ given the network history, i.e., all events before $t$, with only one event allowed in this tiny interval $\Delta t$. Similarly, a survival function $S(t)$ determines the conditional probability that no event happens during a time window $[t,t+\Delta t)$ given the network history, and the conditional density $f(t)=\lambda(t)S(t)$ for an event occurring at time $t$ is defined as well. The functional form of the intensity $\lambda(t)$ is often designed to capture the phenomena of interest; examples include the Poisson process, Hawkes process, self-correcting process, power law, and Rayleigh process [37].
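As a sketch of how a conditional intensity is evaluated, the Hawkes process mentioned above adds an exponentially decaying excitation for each past event to a base rate; the parameter values and function names are illustrative assumptions:

```python
import math

# Sketch of a Hawkes-process conditional intensity on one edge: base rate mu
# plus a decaying kick alpha*exp(-beta*(t - t_i)) for each past event t_i < t.
# Parameter values are illustrative assumptions.
def hawkes_intensity(t, history, mu=0.2, alpha=0.8, beta=1.0):
    return mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in history if ti < t)

# Survival probability of no event in [t, t + dt), S ~ exp(-integral of
# lambda(s) ds), approximated with a left Riemann sum.
def survival(t, dt, history, n=1000):
    step = dt / n
    integral = sum(hawkes_intensity(t + i * step, history) * step for i in range(n))
    return math.exp(-integral)

events = [0.3, 1.1, 2.0]              # past interaction times on this edge
lam = hawkes_intensity(2.5, events)   # intensity shortly after three events
p_quiet = survival(2.5, 1.0, events)  # chance of no event in [2.5, 3.5)
```

The self-exciting form captures the trend noted later in Sec. 2.7.1: recently interacting node pairs are more likely to interact again, since each event temporarily raises the intensity.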
Many dynamic graph embedding techniques consider that interactions between nodes are stochastic processes whose probabilities depend on the topological structure of the network and on node features (if applicable) at each timestamp [8, 38, 39, 40, 41, 42].

### 2.5 Dynamic Graph Embedding Input

In addition to the dynamic network modeling discussed in Sec. 2.3, dynamic graphs can be (i) homogeneous, in which only topological information over time is available; (ii) heterogeneous, in which either nodes, edges (topological heterogeneity) or timestamps (temporal heterogeneity) are assigned labels; (iii) attributed (or with additional information), where nodes and edges may hold several different features; and (iv) constructed from non-relational data (see Fig. 3). This proposed taxonomy extends Cai et al. [3], who considered static graph embedding input only, without leveraging dynamic aspects or handling the difference between topological and temporal heterogeneity. In the following, we discuss each dynamic graph embedding input shown in Figure 3.

Figure 3: The proposed taxonomy for dynamic graph embedding input, an extension of Cai et al. [3] to encompass dynamic networks and to consider topological heterogeneity (as in static networks) and temporal heterogeneity (i.e. timestamps having labels).

#### 2.5.1 Dynamic Homogeneous Graph

Undirected and unweighted homogeneous graphs are widely used as dynamic graph embedding inputs due to their simplicity, carrying only basic structural information over time [43]. Several embedding methods, however, have been proposed to handle weighted [44, 45] and directed dynamic graphs [10].

#### 2.5.2 Dynamic Heterogeneous Graph

Embedding methods that handle topological heterogeneity, i.e. nodes or edges having labels, are usually concerned with node and edge classification [46, 47].
Nevertheless, graph snapshots may also have labels, characterizing different global behaviors [48, 49] and describing another type of heterogeneity, which we have named temporal heterogeneity.

#### 2.5.3 Dynamic Graph with Additional Information

Additional attributes may be assigned to nodes, such as a set of numerical or categorical features. It is possible to define a time-dependent node feature matrix $F(t)\in\mathbb{R}^{N\times f}$, where $f$ is the number of additional node features, and learn representations leveraging these features in addition to the topological structure [7, 50]. Although an edge feature matrix could be defined as well, its usage is much less common.

#### 2.5.4 Dynamic Graph Constructed from Non-Relational Data

Non-relational time series data can be transformed into a dynamic graph by defining a similarity measure between two data instances and constructing a similarity matrix $S(t)$ from it. Several papers use this as an intermediate step to learn vector representations from the constructed graph to support some task-driven application, such as traffic forecasting [51], predicting bike-sharing demand [52], predicting social events [53] and missing label classification on videos [54].

### 2.6 Problem Formulation and Output for Dynamic Graph Embedding

It is important to mathematically formulate dynamic graph embedding in order to understand its outputs.
Given a dynamic graph $\mathcal{G}=(\mathcal{V},\mathcal{E},\mathcal{T})$, where $\mathcal{V}=\\{V(t)\\}_{t\in\mathcal{T}}$ and $\mathcal{E}=\\{E(t)\\}_{t\in\mathcal{T}}$, and an embedding dimension $d$, the problem of embedding a dynamic graph is regarded as learning how to map $\mathcal{G}$ into a $d$-dimensional vector space over time, in which both topological information and temporal dependencies of the network are captured, either by learning representations able to reconstruct the dynamic graph $\mathcal{G}$, to predict the behavior of the network at timestamps outside the lifespan $\mathcal{T}$, or to directly handle a task-driven application such as node classification. When the graph topology evolves, two interpretations of the evolution of the embeddings are possible: (i) the vector representations move through the embedding space, making it possible to trace the trajectory of each node; or (ii) the embedding space itself evolves in time, making it possible to learn mappings between embedding spaces at consecutive timestamps [43]. The time domains of the vector representations and of the network need not be identical, i.e., $\mathbb{T}\neq\mathcal{T}$. For instance, a dynamic graph may have daily information about interactions between users in a social network, while the network analytics and inference are more interested in capturing weekly or even monthly features. Hence, even though the network life span is given by daily timestamps, vector representations are extracted at a coarser temporal granularity.
Therefore, to define the different dynamic graph embedding outputs, it is important to separate between (i) topological embedding, which is similar to the definitions for static graph embedding [3] and concerns node embedding, edge embedding, substructure embedding, and graph snapshot embedding, all of them over time; and (ii) temporal embedding, regarding the relation between the network temporal granularity given by $\mathcal{T}$ and the embedding temporal granularity given by $\mathbb{T}$. The complete classification we propose is shown in Figure 4 and is further discussed in the following topics.

Figure 4: Dynamic graph embedding output taxonomy proposed in this survey.

#### 2.6.1 Temporal Embedding

Temporal embedding concerns the relationship between the input temporal domain $\mathcal{T}$ and the output temporal domain $\mathbb{T}$. Defining $T(\mathcal{G})$ as a set collecting a topological property of the dynamic graph (i.e. the node set, edge set, induced subgraph set or entire graph, as described later in Section 2.6.2), one may distinguish between three different classifications for temporal embedding, as shown in Figure 4 and further illustrated in Figure 5.

###### Definition 1.

An embedding over time is a mapping $\mu_{t}:T(\mathcal{G})\times\mathcal{T}\to\mathbb{R}^{d}\times\mathcal{T}$ (i.e. $\mathbb{T}=\mathcal{T}$). Therefore, each node/edge/substructure/graph at time $t\in\mathcal{T}$ is represented as a vector in a low dimensional space over $t$. In this type of temporal embedding, the mapping $\mu_{t}$ is a bijection in time, allowing one to distinguish, in the vector space, each time instant for each mapped entity of the network (Figure 5(b) shows an example).

###### Definition 2.

A time-grouping embedding is a mapping $\mu_{g}:T(\mathcal{G})\times\mathcal{T}\to\mathbb{R}^{d}\times\mathbb{T}$, where $\mathbb{T}$ is a discrete set composed of elements that aggregate timestamps or time intervals of $\mathcal{T}$.
Therefore, instead of representing each node/edge/substructure/graph at every time $t\in\mathcal{T}$, they are represented at every time aggregate $t^{\prime}\in\mathbb{T}$. Time aggregates are useful for representing networks whose desired time information is of a different granularity than that available in the data. As discussed, vector representations may be required each week from data obtained daily. Figure 5(c) contains another example, where two timestamps have been aggregated into a single one.

###### Definition 3.

A whole-time embedding is a mapping $\mu_{w}:T(\mathcal{G})\times\mathcal{T}\to\mathbb{R}^{d}$, i.e. $\mathbb{T}$ is a unitary set indicating that every timestamp is aggregated into a single point. Therefore, each node/edge/substructure/graph at every time $t\in\mathcal{T}$ is represented as a single vector in $\mathbb{R}^{d}$. Temporal aggregation can be performed: (i) as a step before embedding, aggregating temporal data before applying an embedding method; or (ii) after representation learning over the dynamic graph, usually by performing operations in the vector space (e.g. weighted averages or non-linear functions) to obtain lower temporal granularity representations (i.e. embedding daily network data to extract weekly or even monthly network representations).

(a) Dynamic graph $\mathcal{G}$; (b) Embedding dynamic graph; (c) Time-grouping embedding; (d) Whole-time embedding.

Figure 5: A toy example of embedding each node of a dynamic graph into 2D space taking into account different temporal granularities. (a) Sample network used as a reference for dynamic node embedding, where the snapshot model is considered, different node colors are used to distinguish each of them, and the four timestamps $t_{0}$ to $t_{3}$ are grouped into two time aggregates $T_{1}$ (holding $t_{0}$ and $t_{1}$) and $T_{2}$ (which holds $t_{2}$ and $t_{3}$).
(b) In the embedding over time, each node is mapped into the low-dimensional space at each timestamp, thus describing trajectories in the embedding space. These trajectories are illustrated by arrows indicating the time flow. In this case, $\mathcal{T}=\mathbb{T}_{a}=\\{t_{0},t_{1},t_{2},t_{3}\\}$. (c) Although the time granularity of the network is described by $\mathcal{T}$, the graph is mapped into a different time granularity $\mathbb{T}_{b}=\\{T_{1},T_{2}\\}$, where each new timestamp aggregates temporal information about the original timestamps; the representations were derived from the embedding over time by a hand-crafted aggregation. (d) In the whole-time embedding, $\mathbb{T}_{c}$ is a unitary set, and every timestamp is aggregated into a single representation for each node.

#### 2.6.2 Topological Embedding

Embedding topological properties of a dynamic graph is similar to the static case in Fig. 1; the main difference is the coupling between the temporal embedding discussed above and each graph structure, as presented in the right branch of Fig. 4. Without loss of generality, we consider $\mathcal{V}=V\times\mathcal{T}$ and $\mathcal{E}=E\times\mathcal{T}$ to simplify notation.

###### Definition 4.

A dynamic node embedding is a mapping $\nu_{n}:V\times\mathcal{T}\to\mathbb{R}^{d}\times\mathbb{T}$. Therefore, each node at time $t\in\mathbb{T}$ is represented as a vector in a low dimensional space. Node embedding over time is useful for several applications such as time-dependent node classification, network clustering evolution, and link prediction. It also allows one to track node trajectories in the embedding space and to extract information about node behavior and roles in the network [55, 28].

###### Definition 5.

A dynamic edge embedding is a mapping $\nu_{e}:E\times\mathcal{T}\to\mathbb{R}^{d}\times\mathbb{T}$. Therefore, each edge at time $t\in\mathbb{T}$ is represented as a vector in a low dimensional space.
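The temporal granularities of Definitions 1–3 can be sketched numerically: per-timestamp node vectors (embedding over time), averages over time aggregates (time-grouping), and a single average over the whole lifespan (whole-time). The random vectors below merely stand in for embeddings produced by any method, and all names are illustrative:

```python
import numpy as np

# Sketch of the three temporal granularities for node embeddings, using
# simple averaging as the (illustrative) aggregation operator.
rng = np.random.default_rng(0)
timestamps = [0, 1, 2, 3]              # T = {t0, t1, t2, t3}
groups = {"T1": [0, 1], "T2": [2, 3]}  # time aggregates, as in Figure 5(a)

# Embedding over time: one (n_nodes x d) matrix per timestamp (n=5, d=2).
Z = {t: rng.normal(size=(5, 2)) for t in timestamps}

# Time-grouping embedding: one matrix per aggregate, averaging its timestamps.
Z_grouped = {g: np.mean([Z[t] for t in ts], axis=0) for g, ts in groups.items()}

# Whole-time embedding: a single matrix averaging the entire lifespan.
Z_whole = np.mean([Z[t] for t in timestamps], axis=0)
```

Averaging is only one choice of aggregation; as noted above, weighted averages or non-linear functions may be used instead.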
In addition to edge embedding, a few methods map both edges and nodes, in particular for embedding dynamic knowledge graphs or user-item interaction graphs [41, 56].

###### Definition 6.

A dynamic substructure embedding is a mapping $\nu_{h}:S(\mathcal{G})\times\mathcal{T}\to\mathbb{R}^{d}\times\mathbb{T}$, where $S(\mathcal{G})$ is a set of induced subgraphs in $G$ at each time $t\in\mathcal{T}$. Therefore, each substructure at time $t\in\mathcal{T}$ is represented as a vector in a low dimensional space. Several dynamic graph embedding methods generalize traditional node or link prediction tasks to consider joint prediction over larger $k$-node induced subgraphs [57] and graphlets [58, 59].

###### Definition 7.

A snapshot embedding is a mapping $\nu_{\mathcal{G}}:\mathcal{G}\times\mathcal{T}\to\mathbb{R}^{d}\times\mathbb{T}$, where $\mathcal{G}=\\{G_{t}\\}_{t\in\mathcal{T}}$. Hence, each graph snapshot at time $t\in\mathcal{T}$ is represented as a vector in a low dimensional space. Snapshot embeddings are useful for tracking network behavior over time, when the topological structure of the network is related to some emerging property or global interpretation of node interactions [49, 48].

### 2.7 Dynamic Behaviors

Embedding methods can also be identified according to the type of time dynamics they capture. Most methods can capture the evolution of connectivity between network nodes, i.e. the addition and removal of edges. However, several works generalize their methods to include adding and removing nodes in the network. In addition, there are methods capable of capturing other temporal properties of networks, including varying edge weights, changing node and edge classifications, evolving node and edge attributes, and dynamic processes on the network (such as diffusion cascades).
We identified three groups of dynamical behaviors on networks: topological evolution (nodes and edges varying over time), feature evolution (node and edge features changing over time), and processes on networks (a time-dependent process taking place on the network), as shown in Figure 6.

Figure 6: Several dynamical properties that may be captured by embedding methods, divided into topological evolution (concerning changes in the node and edge sets), feature evolution (related to changes of node or edge labels or other additional information over time) and processes on the network (regarding diffusion and role evolution of nodes in a network).

#### 2.7.1 Topological Evolution

Topological evolution means that nodes and edges may vary over time, being added to or removed from the network. Node evolution is characterized by changes to the node set $V$, whereas edge evolution concerns connections between nodes that may be continuously formed or broken. Several dynamic graph embedding methods require that the number of nodes in a network does not change over time, since they handle adjacency or similarity matrices with fixed dimensionality [43]. Moreover, approaches considering edge creation may capture a trend through historical records of interaction between a pair of nodes (i.e. if two nodes have recently made multiple connections, they are more likely to make future connections), and ternary closure (if a third node is a common neighbor of two unconnected nodes, there is a greater tendency to close the triangle and form a clique) [57].

#### 2.7.2 Feature Evolution

In many problems involving heterogeneous graphs, it is assumed that nodes have fixed labels. However, it is also possible to observe label evolution: in a citation network, an author may have as their main research area a different topic compared to previous years. Such changes are linked in some way to the topological evolution of the network.
In node classification tasks, each node in a graph has a class label, hence it is possible to predict the class labels for the nodes in a graph $G(t_{k})$ using the previous graphs $G(t_{0}),...,G(t_{k-1})$ [31]. Moreover, in several real-world networks, nodes and edges may have rich attributes (i.e. additional information) that change over time in addition to the network structure, and the topology may influence attribute modification. Therefore, embedding methods may also capture this information evolution [50, 56]. Furthermore, edge weights may also change over time in weighted networks, and their changes may be handled by embedding methods [60].

#### 2.7.3 Processes on Networks

It is possible to analyze dynamic processes on the network, such as information diffusion or disease spreading, and other emergent properties, such as node roles changing over time. Two of the most basic and widely studied diffusion models are [61]: (i) the linear threshold model, where each node $v_{i}$ is influenced by each neighbor $v_{j}\in\mathcal{N}_{v_{i}}$ according to a sum of weights $b_{ij}$, and if this sum exceeds a randomly chosen threshold $\theta_{v_{i}}$ at time $t_{k}$, the node $v_{i}$ is activated at time $t_{k+1}$; and (ii) the independent cascade model, where an active node $v_{i}$ influences each neighbor $v_{j}\in\mathcal{N}_{v_{i}}$ with probability $p_{ij}$, independently of the network history. In this context, the problem of modeling diffusion by independent cascades comes down to learning the probability distributions characterizing the hidden influence between users, in order to discover the main communication channels of the network. A group of nodes sharing similar roles in the network can be regarded as a set of nodes that are more structurally similar to nodes inside the set than outside, whereas communities are sets of nodes with more connections inside the set than outside.
Hence, dynamic role evolution aims to automatically discover groups of nodes (representing common patterns of behavior) based on the latent features given by their representations [55].

### 2.8 Stability and Temporal Smoothness

A successful dynamic graph embedding algorithm should create stable embeddings over time. In other words, an embedding should learn similar representations at consecutive timestamps if the underlying graph changes only slightly. More specifically, given the dynamic graph $\mathcal{G}$, if the graph snapshot $G(t_{k+1})$ is similar to $G(t_{k})$ (as measured, for instance, by the adjacency matrices $A(t_{k+1})$ and $A(t_{k})$), the embedding matrix $Z(t_{k+1})$ is expected to be similar to $Z(t_{k})$. Goyal et al. [10] propose the stability constant $K_{A}(\nu)$ as a metric to evaluate the stability of a dynamic graph embedding function $\nu$ in terms of the adjacency matrix $A$ over time. More specifically, the authors define the stability of an embedding at a given timestamp as the ratio of the Frobenius norm of the difference between embedding matrices to the Frobenius norm of the difference between adjacency matrices, at consecutive timestamps. The stability constant is then the maximum difference between stabilities calculated along the entire life span $\mathcal{T}$ of the network, and the authors consider a dynamic embedding $\nu$ stable as long as the stability constant is small. It is possible to further extend these proposals to include any similarity matrix $S$, and to take the limit of two consecutive timestamps $t_{k+1}-t_{k}=\Delta t\to 0$ in order to analyze continuity aspects of embeddings (see future directions in Sec. 5). Although the above discussion relates to the global behavior of an embedding, most of the proposed embedding methods seek local stability, i.e., stability for each node in the network.
If the local topological structure around a node $v$ has undergone few changes, the representations $z_{v}(t_{k+1})$ and $z_{v}(t_{k})$ are expected to be similar, assuming a temporal smoothness in the embedding space [31, 62, 63].

## 3 Techniques for the Embedding of Dynamic Graphs

In this section, we propose a taxonomy of dynamic graph embedding techniques, summarized in Figure 7. This taxonomy has been inspired by previous static graph embedding classifications [4, 3]. Several classifications for dynamic graphs can be seen as extensions of static graph methods: (i) matrix factorization approaches; (ii) deep learning approaches; (iii) random walk-based methods (which may be understood as node sequence sampling methods); (iv) optimization based on edge reconstruction, which also leverages temporal smoothness; and (v) graph kernel methods. Here we include tensor factorization approaches, fusing them with matrix factorization approaches to define general factorization-based approaches, and introduce two novel paradigms: (i) temporal point process based methods, which handle similarity matrix changes as stochastic processes; and (ii) agnostic models, which learn embeddings over time independently of the approach used for each graph snapshot in dynamic networks. Earlier approaches to map dynamic networks into vector spaces proposed learning independent vector representations of each snapshot by employing static graph embedding methods. However, since representation learning assumes that the probability mass of the data concentrates on manifolds of much smaller dimensionality than the original space where the data lives, the evolution of the network may cause two relevant impacts on the embedding space: (i) the embedding vectors move on the manifold; and (ii) the manifold itself evolves in time [43].
Therefore, integrating the spatial topology of the network and its temporal evolution into feature vectors that encompass these temporal correlations enhances the performance of prediction, classification and many other temporal network analysis problems [64].

Figure 7: Proposed taxonomy for dynamic graph embedding techniques, organized by algorithmic approaches.

Next, we present each classification proposed by our taxonomy, discussing its insights, how it leverages temporal dependence in addition to topological structure, and describing several methods following each approach. Concrete examples of applications using the methods covered here are presented only in the following section.

### 3.1 Factorization-based methods

Factorization-based approaches generate node embeddings over time by finding low-rank decompositions of time-dependent similarity measures. These similarity measures can be represented by: (i) a sequence of matrices over time; or (ii) three-way tensors, where the first two dimensions are related to the similarity between nodes, and the third dimension is the temporal slice. The matrix representation relies on capturing the temporal evolution between representations at adjacent timestamps, and the embedding learns how the network changes over time, separating the topological contribution from the temporal contribution. The tensor representation, on the other hand, couples topology and temporal evolution in a single structure to be factorized in a unified way. Consequently, it is natural to separate the factorization approaches into matrix factorization approaches and tensor factorization approaches.

#### 3.1.1 Matrix factorization approaches

Matrix factorization approaches express the evolutionary structure of networks in the form of matrices, thereby leveraging the time-dependent structural correlation among pairs of nodes.
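As a minimal sketch of the static building block that the dynamic variants extend, a truncated SVD of a single snapshot's adjacency matrix yields low-rank node embeddings whose inner products approximate the similarity matrix; this is illustrative code, not any specific cited method:

```python
import numpy as np

# Rank-d node embeddings of one snapshot A(t) via truncated SVD:
# A ~ (U_d sqrt(S_d)) (sqrt(S_d) V_d^T), so inner products of the two
# embedding matrices reconstruct the similarity measure.
def svd_embedding(A, d):
    U, s, Vt = np.linalg.svd(A)
    Z_src = U[:, :d] * np.sqrt(s[:d])        # "source" node embeddings
    Z_dst = Vt[:d, :].T * np.sqrt(s[:d])     # "target" node embeddings
    return Z_src, Z_dst

# Undirected 4-node path graph as one snapshot A(t_k).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
Zs, Zt = svd_embedding(A, d=2)
A_hat = Zs @ Zt.T   # low-rank reconstruction of the similarity matrix
```

Embedding each snapshot independently this way is exactly the earlier approach criticized above; the paradigms discussed next differ in how they propagate such a factorization through time.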
Dynamic graph embedding based on matrix factorization may be classified, as in static graphs, according to the type of loss function the approach minimizes. While most approaches factorize the similarity matrix and define an inner-product function between node embeddings to approximate the proximity measure [55, 60, 28, 32, 9], graph Laplacian eigenmaps can be used to reconstruct the time-dependent adjacency matrix from an eigendecomposition of the Laplacian matrix [7]. The novelty in the embedding of dynamic networks lies in how the factorization is propagated over time, maintaining the stability of the representations while inserting temporal dependence into the matrix decomposition. It is possible to distinguish three paradigms based on how the connections between different timestamps are modeled: (i) adding temporal smoothing directly to the loss function, thus ensuring the stability of embeddings over time and focusing on node trajectories; (ii) updating the similarity matrix using matrix perturbation theory, assuming that temporal evolution changes the network topology only slightly; and (iii) defining a temporal matrix factorization, decomposing the similarity matrix into a constant term and a time-dependent term. Figure 8 shows these two classifications.

Figure 8: Classification of matrix factorization techniques for dynamic graph embedding, taking into account the type of cost function (similar to Cai et al. [3]) and how temporal dependence is leveraged in the matrix decomposition.

* • Jointly optimizing loss function and temporal smoothing: The methods based on this insight perform the matrix factorization of each snapshot, and additionally bind the representations through a loss function term that is minimized when, for each node, the embeddings at two consecutive timestamps are similar.
In other words, these approaches assume temporal smoothness in the embedding space, and the optimal embedding should jointly reconstruct the similarity matrix over time while being continuous. Given the loss function $\mathcal{L}_{0}(t)$ minimized by a matrix factorization approach on static graphs at each timestamp $t\in\mathcal{T}$, the joint similarity reconstruction and temporal smoothing loss function for a dynamic graph can be defined as:

$\mathcal{L}=\sum_{t\in\mathcal{T}}\mathcal{L}_{0}(t)+\tau\sum_{t\in\mathcal{T}}\sum_{t^{\prime}\in\mathcal{T}|(t^{\prime}-t)\leq\Delta t}\mathcal{L}_{\Delta t}(t^{\prime},t),$ (1)

where $\Delta t$ is the time interval between consecutive timestamps in $\mathcal{T}$, $\mathcal{L}_{\Delta t}(t^{\prime},t)$ is a loss function for each pair of consecutive timestamps ($t$ and $t^{\prime}$) imposing temporal smoothness, and $\tau>0$ regulates the contribution of the temporal smoothing term. Approaches following this paradigm include Ferreira et al. [28] and Zhu et al. [9]. Two generalizations of the loss function can be implemented: (i) the contribution of each timestamp may not be homogeneous, since the future in general depends more directly on the recent past than on the distant past; the loss function $\mathcal{L}_{0}$ can be weighted by a function $f(t)$ that is close to $1$ when $t\approx t_{f}$ (where $t_{f}$ is the last timestamp, i.e. $t_{N_{S}-1}$ for the snapshot model), and close to $0$ if $t\ll t_{f}$ (for example, $f(t)=e^{t-t_{f}}$); and (ii) the dataset may contain non-homogeneous time intervals, i.e., a function $g(\Delta t)$ may be defined in order to leverage different values of $\Delta t$, since the smoothness of embeddings at timestamps $t$ and $t+\Delta t$ may be less important when $\Delta t$ is large.

* • Incremental updates on embeddings: In these methods, the initial graph snapshot $G_{0}$ provides the initial similarity matrix $S(0)$, which is factorized using a standard matrix decomposition.
Afterwards, the embeddings for subsequent timestamps are updated by assuming that $||S(\Delta t)-S(0)||<\epsilon$ for some small $\epsilon$, i.e. the similarity matrix $S(\Delta t)$ is a perturbation of the initial matrix $S(0)$. Therefore, it is possible to update a low dimensional representation of the nodes iteratively using first-order matrix perturbation theory for symmetric matrices [65]. Li et al. [7] apply matrix perturbation theory to Laplacian eigenmaps, whereas Zhang et al. [32] propose TIMERS to update SVD decompositions incrementally. It is noteworthy that these approaches accumulate errors due to the perturbative approximation, and they may not be very effective if the network evolves intensely over time. Nevertheless, since for many real networks the temporal evolution is quite sparse (i.e. the number of added or removed edges is much lower than the total number of edges), these methods present promising results in dynamic link prediction, node clustering and node classification. Moreover, these strategies may reduce error accumulation over time by setting a restart time, i.e. a time interval at which the algorithm recalculates the factorization instead of performing the incremental update [32].

* • Temporal Matrix Factorization: In these methods, the time-dependent similarity matrix $S(t)$ is decomposed by a temporal rank-$k$ matrix factorization model as follows [60]:

$S(t)=h(U\times V(t)^{T}),$ (2)

where both $U$ and $V(t)$ are $|V|\times k$ matrices, $U$ is a constant matrix, $V(t)$ is a time-dependent matrix and $h(\cdot)$ is an element-wise function. For undirected networks, $S(t)$ is symmetric and it is possible to (i) average $UV(t)^{T}$ and its transpose as the prediction of $S(t)$; or (ii) factorize $S(t)$ as the product of a time-dependent matrix $V(t)$ and its transpose [60].
Therefore, this method learns two types of embedding: (i) a constant embedding, given by the rows of $U$, which represents persistent properties of pairs of nodes; and (ii) a time-varying embedding, given by the rows of $V(t)$, which represents changes in topology over time. A non-linearity may be introduced by the function $h$, e.g. a logistic function so that the reconstruction $U\times V(t)^{T}$ can be interpreted as a probability measure for the similarity. The main challenge of this approach is describing the time-dependent matrices $V(t)$ for each timestamp. The dynamic behavioral mixed-membership model (DBMM) proposed by Rossi et al. [55] was the first to employ this factorization, and proposed: (i) a transition matrix $T$ such that $V(t+\Delta t)\approx V(t)T$, (ii) a stacked transition model, which stacks training examples from $l$ previous timestamps, and (iii) a summary transition model, defining $V(t)$ at a specific timestamp as a linear combination of time-dependent matrices at previous timestamps. Yu et al. [60], who developed the formal description of the temporal matrix factorization approach described above, represented $V(t)$ as a polynomial function of time of order $p$, i.e. $V(t)=\sum_{i=0}^{p}W^{(i)}t^{i}$, where $\\{W^{(i)}\\}_{i=0}^{p}$ are $|V|\times k$ matrices which need to be learned by the model along with $U$. The LIST model [64] leverages a temporal matrix factorization to learn the feature vector of each node by simultaneously optimizing a temporal smoothness constraint and a network propagation constraint, ensuring that two vertices that are connected are likely to share similar features. Table 1 compares the approaches based on matrix factorization discussed in this section. Some combinations of the matrix factorization approaches have not yet been explored, such as graph Laplacian eigenmaps including a temporal smoothing term, and a Laplacian temporal matrix factorization.
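As a minimal sketch of the temporal matrix factorization of Eq. (2), the snippet below implements $S(t)=h(UV(t)^{T})$ with the polynomial parameterization $V(t)=\sum_{i=0}^{p}W^{(i)}t^{i}$ of Yu et al. [60]; the dimensions, variable names and the logistic choice of $h$ are illustrative assumptions, not any paper's reference implementation (the learning of $U$ and $W^{(i)}$ is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, k, p = 5, 2, 1  # 5 nodes, rank-2 factors, linear drift in time (all assumed)

U = rng.standard_normal((n_nodes, k))          # constant node embeddings
W = rng.standard_normal((p + 1, n_nodes, k))   # polynomial coefficients W^(0)..W^(p)

def V(t):
    """Time-dependent factor V(t) = sum_i W^(i) * t^i."""
    return sum(W[i] * t**i for i in range(p + 1))

def predict_similarity(t, h=lambda x: 1.0 / (1.0 + np.exp(-x))):
    """Reconstruct S(t) = h(U V(t)^T); a logistic h yields values in (0, 1),
    interpretable as similarity probabilities."""
    return h(U @ V(t).T)

S_hat = predict_similarity(t=3.0)  # an (n_nodes x n_nodes) similarity estimate
```

For undirected networks, one would additionally symmetrize the prediction, e.g. by averaging `S_hat` with its transpose, as discussed above.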
Table 1: Matrix factorization based Dynamic Graph Embedding.

| Category | Approach | BCGD [9] | [28] | DANE [7] | TIMERS [32] | DBMM [55] | TMF [60] | LIST [64] |
|---|---|---|---|---|---|---|---|---|
| Cost Function Type | Laplacian Eigenmaps | | | ✓ | | | | |
| | Similarity Matrix Factorization | ✓ | ✓ | | ✓ | ✓ | ✓ | ✓ |
| Temporal Dependence | Temporal Smoothing | ✓ | ✓ | | | | | |
| | Incremental Updates | | | ✓ | ✓ | | | |
| | Temporal Matrix Factorization | | | | | ✓ | ✓ | ✓ |

#### 3.1.2 Tensor Factorization

Tensors are higher-order generalizations of vectors and matrices, represented as $X\in\mathbb{R}^{I_{1}\times I_{2}\times...\times I_{N}}$, where the order of $X$ is $N>2$ [66]. Dynamic networks are usually expressed as three-way tensors, i.e. $N=3$. Several tensor factorization methods that can extract latent structure in the data have been proposed, including the CANDECOMP/PARAFAC (CP) family, the Tucker family, and alternative models [66]. For dynamic networks, CP decomposition is the most popular approach, since it learns both node embeddings and temporal embeddings with computational efficiency. Dunlavy et al. [5] used the temporal profiles computed by CP as a basis for predicting the scores in future timestamps, using a forecasting method for time-series data with periodic patterns. Rafailidis and Nanopoulos [67] represent continuous user-item interactions over time using a three-way tensor, propose a measure of user-preference dynamics (UPD) that captures the rate at which the current preferences of each user have shifted, and generate recommendations based on CP tensor factorization. Even though models based on Tucker decomposition achieve good performance, they require substantial computational power, which may explain why every tensor factorization approach for dynamic graphs relies on CP decomposition [68]. The alternative models discussed by Acar et al.
[66], including the Multilinear Engine (ME), STATIS, and multiblock multiway models, have also yet to be explored by dynamic graph embedding methods.

### 3.2 Approaches based on Deep Learning

Deep Learning has shown remarkable performance in a wide variety of research fields, including computer vision and language modeling, and it also benefits several applications related to representation learning and other tasks over graphs. Many successful static graph embedding methods have been proposed in the last few years, from Graph Neural Networks [69] and Convolutional Neural Networks on Graphs [70] to autoencoders, such as Structural Deep Network Embedding (SDNE) [71]. Different architectures based on neural networks are used both to extract the topological properties of a network and to capture temporal dependencies therein. Some preliminary works developed fully-connected neural networks, either using them as an autoencoder or as a part of the decoding process [10, 27, 72, 50]. Architectures based on recurrent neural networks (RNNs), including long short-term memory units (LSTMs) [73] and gated recurrent units (GRUs) [74], leverage a sequence of graphs, or their representations, in order to learn or enhance embeddings taking temporal correlation into consideration, and store information over time to handle more complex correlations beyond consecutive timestamps [27, 75, 76]. Attention mechanisms [77] are used to further improve understanding of the most relevant time points for each representation [63, 78]. Convolutional neural networks (CNNs) [79] for graphs have been widely adopted in order to handle topological properties, including Graph Convolutional Networks (GCNs) [80]. Iterative propagation procedures have also been employed to learn the graph topology, as in Graph Neural Networks (GNNs) [69], GraphSAGE [81], and Gated Graph Neural Networks (GGNNs), the latter also being able to learn the reachability across the nodes in a graph using GRUs [76].
These approaches have also been explored by several dynamic graph embedding methods [72, 82]. Furthermore, instead of using RNNs to explore the characteristics of the network over time, many succeeding models employ convolutional networks (for instance, 1D CNNs) to leverage temporal dependencies [83, 53, 84, 85]. Neural networks may also learn an approximation of the network distribution by using generative models, such as variational autoencoders (VAEs) [86] and generative adversarial networks (GANs) [87]. These approaches learn low-dimensional latent representations of the training data that store information about the type of output the model needs to generate, using a generative network to capture the data distribution and a recognition network (or a discriminator network) to estimate the probability that a sample came from the data distribution. Variational graph autoencoder (VGAE) [88] and GraphGAN [89] apply these approaches to static graphs. These generative models are used in dynamic graphs to learn the data distributions over time [83, 31, 90, 91, 92, 85, 44]. In order to define a taxonomy that categorizes every method with minimal overlap between different categories, we classify the approaches according to the general architecture of the neural network. We have identified two main general architectures: (i) the encoder-decoder perspective, which holds the majority of works concerning learning representations and decoding them for an application (i.e. network reconstruction, link prediction, or node/graph classification); and (ii) the encoder, sampling, and decoder perspective, which contains the generative models and handles representations as probability distributions (see Figure 9 for the complete proposed taxonomy of deep learning based approaches). In the following, we further detail each of these methods and also present the most important existing works for each of them.
Figure 9: Classification of deep learning techniques for dynamic graph embedding, divided into the encoder-decoder perspective and generative models.

#### 3.2.1 Encoder-Decoder Architecture

The encoder-decoder architecture consists of an encoder, which maps the data into a low-dimensional representation, and a decoder, which aims either to (i) reconstruct the original data; or (ii) solve an application-driven problem, such as a binary or multi-class classification problem. In the first case, the network is called an autoencoder, since the encoding seeks to be as lossless as possible. In the second case, the encoding is lossy and the original data cannot be fully recovered. Concerning dynamic graphs, an encoder receives graph snapshots as input and the decoder may exhibit three distinct outputs: (i) traditional autoencoders, whose representations reconstruct each graph snapshot, hence following the lossless encoding paradigm; (ii) dynamic autoencoders, whose representations do not reconstruct each graph snapshot but instead reconstruct a snapshot at a future timestamp, therefore predicting the network structure; and (iii) discriminator networks, whose embeddings do not reconstruct the network topology at all, but are intended to learn node labels, node clusters or a global network property, with loss functions that are usually application-driven. * • Traditional Autoencoders: These architectures are applied to each graph snapshot, similar to SDNE in static graphs [71]. DynGEM [10] builds a fully-connected autoencoder for each graph snapshot, using a transfer learning paradigm to share parameters between two consecutive autoencoders and a strategy that allows the autoencoder network to widen its layers and insert new layers in order to handle a growing number of nodes in the graph.
LDANE [50] follows a similar strategy, and handles node attributes by adding a margin-based ranking loss term to the loss function, which ensures that the embeddings of two similar nodes are closer than the embeddings of two non-similar nodes. Models employing recurrent neural networks and their hidden states to encode the dynamic graph structure have also been proposed. For instance, Taheri et al. [48] developed DyGGNN, which leverages Gated Graph Neural Networks (GGNNs) to capture graph topology and couples them with an LSTM encoder to handle graph dynamics, and with an LSTM decoder to reconstruct the structure of the dynamic graph at each timestamp. Other approaches include DySAT [63] and DGNN [93]. Table 2 summarizes the deep learning approach employed by each of these methods.

Table 2: Traditional Autoencoders for Dynamic Graph Embedding.

| Algorithm | Deep Learning Model |
|---|---|
| DynGEM [10] | Fully-Connected Autoencoder (based on SDNE [71]) |
| LDANE [50] | Similar to DynGEM, also handles node attributes (margin-based ranking loss term) |
| DyGGNN [48] | Gated Graph Neural Networks and LSTMs |
| DySAT [63] | Structural and Temporal Self-Attention |
| DGNN [93] | Attentive LSTMs |

* • Dynamic Autoencoders: The input of these methods is the historical record of the network; the output is the reconstructed graph at a future time. Bonner et al. [83] regarded this approach as the temporal graph offset reconstruction problem, i.e. creating temporal graph embeddings that recreate a future timestamp of the graph. Several embedding methods developed a recurrent architecture to capture the dynamics of the network in order to predict its future state, such as Goyal et al. [27], who propose three different strategies for handling a set of graph snapshots, from autoencoders (dyngraph2vecAE) to LSTM networks (dyngraph2vecRNN), and the combination of both (dyngraph2vecAERNN). Chen et al.
[75] propose an encoder-LSTM-decoder (E-LSTM-D), which resembles dyngraph2vecAERNN, but uses rectified linear units (ReLUs) as the activation function for each encoder/decoder layer, and adds a regularization term to prevent overfitting. AdaNN [94] employs a triple attention module, leveraging topology, node attributes and temporal attention, further feeding them into two connected GRUs and concatenating the outputs into a joint state vector. TRRN [95] adopts memories to enhance temporal capacity, applying multi-head self-attention and learning contextualized representations by feeding different factors (including node features and topological features) and updated memories into LSTMs. Graph convolutions combined with recurrent units have been exploited for building dynamic autoencoders. GC-LSTM [72] uses convolutions to extract topological features, coupled with an LSTM in order to learn temporal features of the dynamic network. EvolveGCN [47] proposes two versions that follow a similar approach: (i) EvolveGCN-H, where the GCN parameters are hidden states of GRUs that take node embeddings as input; and (ii) EvolveGCN-O, where the GCN parameters are the input/output of an LSTM unit. T-GCN [84] uses GCNs to learn topological structures, then passes these features to GRUs in order to extract temporal dependencies. Table 3 summarizes the deep learning approach employed by each of the methods based on dynamic autoencoders.

Table 3: Dynamic Autoencoders for Dynamic Graph Embedding.
| Algorithm | Deep Learning Model |
|---|---|
| TO-GAE [83] | GCNs over time |
| dyngraph2vecAE [27] | Fully-Connected (FC) Autoencoders |
| dyngraph2vecRNN [27] | Sparsely Connected LSTMs |
| dyngraph2vecAERNN [27] | FC Encoder, LSTMs and FC Decoder |
| E-LSTM-D [75] | FC Encoder, LSTMs and FC Decoder |
| AdaNN [94] | Spatial, Attribute-Topology and Temporal Attention, and GRUs |
| TRRN [95] | FC Encoder, Transformer-Style Self-Attention and LSTMs |
| GC-LSTM [72] | Graph Convolutions and LSTMs |
| EvolveGCN-H [47] | GCNs and GRUs |
| EvolveGCN-O [47] | GCNs and LSTMs |
| T-GCN [84] | GCNs and GRUs |

* • Discriminator Networks: This approach considers that a neural network must learn application-driven representations, such as properly classifying nodes or graphs/subgraphs over time [57], extracting network properties [82], or predicting a specific global feature [51]. Several approaches combine GCNs and recurrent networks, such as DynGraph2Seq [82], TSGNet [96] and NAAM [97]. Topological features may also be extracted by other techniques as an alternative to GCNs, as Xu et al. showed by implementing STAR [78]. Other approaches use convolutions for both spatial and temporal feature extraction, including the Spatio-Temporal Graph Convolutional Network (STGCN) [51]. These discriminator networks are commonly employed to embed graphs constructed from non-relational data. For instance, DynamicGCN [53] extracts and learns graph representations from historical event documents, encoding the input data into a sequence of graphs with node embeddings, and developing a graph convolutional network model to predict the occurrence of certain types of events. TD-Graph LSTM [54], on the other hand, is applied to action-driven video object detection, passing each frame through a spatial convolutional network in order to detect similar regions in consecutive frames, and constructing a temporal graph structure by connecting semantically similar regions.
LSTM units take the spatial visual features as the input states, incorporating temporal motion patterns for participating objects in the action while minimizing an action-driven object categorization loss. Li et al. [52] propose a spatial-temporal graph embedding model called STG2Vec, which includes temporal attention and incorporates multi-source information to feed a collaborative temporal model based on LSTMs. Table 4 lists the approaches described above and summarizes the deep learning approach employed by each of them, along with the specific task each method attempts to solve.

Table 4: Discriminator Networks for Dynamic Graph Embedding.

| Algorithm | Deep Learning Model | Task |
|---|---|---|
| DynGraph2Seq [82] | GCNs, LSTMs and Hierarchical Attention | Sequence of Target Health Stages |
| TSGNet [96] | GCNs and LSTMs | Node Classification |
| NAAM [97] | GCNs, LSTMs (or BiLSTMs) and Temporal Attention | Forecasting User Interactions |
| STAR [78] | Spatio-Temporal Attention and GRUs | Node Classification |
| STGCN [51] | GCNs and Gated Convolutional Neural Networks | Traffic Forecasting |
| DynamicGCN [53] | GCNs with Updates from Previous Timestamps | Event Prediction |
| TD-Graph LSTM [54] | CNNs and LSTMs | Missing Label Classification |
| STG2Vec [52] | Temporal Attention and LSTMs | Bike-sharing Demand |

#### 3.2.2 Generative Models

Generative algorithms attempt to predict features given a certain label, i.e. to learn the data distribution patterns, in contrast to discriminative models, which attempt to learn representations and classify each input. Therefore, these approaches have the power to synthesize data in addition to compressing it and learning embeddings.
Regarding generative neural networks for dynamic graph embedding, two groups of approaches arise: (i) methods based on variational autoencoders (VAEs), which encode an input as a distribution over the latent space; and (ii) methods based on generative adversarial networks (GANs), which train both a generator network to synthesize graphs and a discriminator network to distinguish between true graphs and generated ones. * • Based on Variational Autoencoders: The encoder of a variational autoencoder takes a data point and produces a distribution, usually parameterized as a multivariate Gaussian. In this case, the encoder predicts the mean and standard deviation of the Gaussian distribution, and the lower-dimensional embedding is sampled from this distribution. The decoder is a variational approximation, which takes an embedding and produces an output [86]. Every variational autoencoder for dynamic graphs is inspired by VGAE [88] to address the encoding of each snapshot, differing in how they handle the graph evolution. These methods include (i) Dyn-VGAE [31], which adds a temporal smoothness loss term, (ii) TO-GVAE [83], which applies VGAE over time to reconstruct subsequent snapshots, and (iii) VGRNN [92], which adopts VGAE whose prior distribution parameters are based on the hidden states of previous timestamps. There are several proposals to enhance the encoder model. For instance, Bonner et al. [90] propose a Temporal Neighbourhood Aggregation (TNA) block that combines a GCN with a GRU in the encoder, controlling the combination of topological and temporal learning via a final linear layer. Zhao et al. [91] have developed a framework called BurstGraph, which splits the adjacency matrix of a graph into a standard adjacency matrix and a burst adjacency matrix to pick up unexpected behavior within a time duration.
A variational autoencoder is employed using GraphSAGE [81] as the encoder, and two decoders: a standard decoder, which learns representations $Z^{v}$, and a bursty decoder, which learns sparse embeddings $Z^{b}$. * • Based on Generative Adversarial Networks: Generative adversarial networks (GANs) are algorithmic architectures that use two neural networks, pitting one against the other in a game-theoretical minimax game to combine generative and discriminative models. This approach has been applied to graph representation learning by a framework called GraphGAN [89]. While the generator network attempts to approximate the true graph connectivity distribution, the discriminator network aims to discriminate the connectivity of each node pair. Therefore, the generator network tries to deceive the discriminator network, whereas the discriminator network improves itself to distinguish better and better between true edges and generated edges. Inspired by this recent approach to graph representation learning, some methods arise for dynamic graph embedding. DynGraphGAN [85] designs the discriminator network with the following components: (i) a GCN to encode neighborhood features of nodes; and (ii) CNNs to learn the temporal graph evolution along the time dimension. The generator network implements a sigmoid function of the inner product of two nodes’ embeddings at a timestamp $t$ to estimate the probability distribution of an edge connecting these nodes at time $t$. GCN-GAN [44] presents an architecture whose generative network consists of a GCN layer, an LSTM layer and a fully-connected output layer, while the discriminator network is a fully-connected feedforward neural network. Each of the GAN-based methods previously mentioned is trained with a single graph. Therefore, the trained model is capable of generating artificial snapshots following a structure and dynamics similar to those of the original graph used during training.
Also, note that the resulting model is limited to creating snapshots with the same number of nodes as the graph used in training, given that the number of parameters of the model is proportional to the number of nodes in the TVG. This dependency is observed, for example, in the last layer of the generative model of GCN-GAN, which builds an adjacency matrix with dimensions $N\times N$ from an embedding vector (output by the LSTM layer). Table 5 summarizes the generative model and architectures employed by the methods discussed in this section.

Table 5: Generative Models for Dynamic Graph Embedding.

| Algorithm | VAE | GAN | Deep Learning Models |
|---|---|---|---|
| TO-GVAE [83] | ✓ | | GCNs |
| Dyn-VGAE [31] | ✓ | | Original VAE with Temporal Smoothness |
| TNA [90] | ✓ | | GCNs and GRUs |
| [91] | ✓ | | GraphSAGE [81] and RNNs |
| VGRNN [92] | ✓ | | GCNs and LSTMs |
| DynGraphGAN [85] | | ✓ | GCNs and CNNs |
| GCN-GAN [44] | | ✓ | GCNs and LSTMs |

### 3.3 Random Walk Approaches

Another class of graph embedding methods relies on random walks. Multiple random walks of fixed length $L$ are treated as sentences, generating a context for each node and extracting higher-order dependencies without adjacency matrices. The node sequence matrix thus generated is factorized, usually by applying a neural network architecture, the most popular being the Skip-Gram [98, 99], to produce low-dimensional vector representations for each node while maintaining their proximity in the new embedded space. Random walks applied to dynamic graphs must generate time-dependent contexts $C(t)$ in addition to sequences that capture topological dependencies.
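As an illustration of walk-based context generation on a single snapshot, the following minimal sketch produces fixed-length walks and the sliding-window node-context pairs that a Skip-Gram model would consume (the toy adjacency list and all names are assumptions; the Skip-Gram training itself is omitted):

```python
import random

# Toy undirected graph as an adjacency list (illustrative assumption).
adjacency = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}

def random_walk(start, length, rng):
    """Unbiased fixed-length random walk, treated as a 'sentence' of nodes."""
    walk = [start]
    for _ in range(length - 1):
        walk.append(rng.choice(adjacency[walk[-1]]))
    return walk

def contexts(walk, window):
    """(node, context-node) pairs within a sliding window, as in Skip-Gram."""
    pairs = []
    for i, node in enumerate(walk):
        for j in range(max(0, i - window), min(len(walk), i + window + 1)):
            if j != i:
                pairs.append((node, walk[j]))
    return pairs

rng = random.Random(42)
walks = [random_walk(v, length=5, rng=rng) for v in adjacency for _ in range(3)]
training_pairs = [p for w in walks for p in contexts(w, window=2)]
```

For a dynamic graph, the categories below differ precisely in how this corpus of walks is made time-dependent.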
Then, the methods based on random walks are separated according to how they include the temporal aspect in the calculation: (i) random walks on snapshots, where a time-dependent node sequence matrix is generated by applying random walks starting from each node at each snapshot, and then optimizing a joint problem that takes the temporal dependency into account; (ii) evolving random walks, where the node sequences are generated for the initial time (first snapshot), and the method then incrementally updates node representations by updating random walks starting from nodes affected by the topological evolution; and (iii) temporal random walks, which define time-dependent context matrices by allowing random walks across consecutive timestamps while respecting the time-ordering restriction. Also, other node sequence sampling methods besides random walks may be applied to generate contexts, including neighborhood aggregation. In the following, we further detail each of these methods.

#### 3.3.1 Random Walks on Snapshots

This approach performs random walks on each snapshot of a dynamic graph, obtaining vector representations by optimizing a joint problem that takes temporal dependencies into account. It is important to note that, in methods following this approach, the temporal connection between contexts at consecutive time points is not modeled by the random walks themselves. Instead, the temporal dependency is introduced later, in the generation of the embeddings, taking temporal smoothness into account. For instance, embeddings may be learned for each graph snapshot independently using methods including node2vec and DeepWalk, and afterward the representations may be combined using operations ranging from simple vector concatenation to dynamic embeddings and orthogonal transformations that align embedding vectors at consecutive timestamps. De Winter et al. [100] and Dyn2Vec [29] apply vector concatenation to node representations over time.
The former applies node2vec to each snapshot, whereas the latter employs a DeepWalk variant whose probability of choosing a certain edge depends on the normalized edge weight. Chen et al. [25] initialize node embeddings using a Gaussian prior with a diagonal covariance, and learn representations over time using dynamic Bernoulli embeddings, considering the rows of the node sequence matrix as the context for each node. tNodeEmbed [101] preserves the static network neighborhood of each node in a $d$-dimensional feature space by using Orthogonal Procrustes, and optimizes an LSTM for specific tasks (i.e., link prediction and multi-label node classification). DynSEM [62] trains node embeddings for each timestamp using node2vec, aligns the node embeddings into a common space using Orthogonal Procrustes, and optimizes a joint loss function that takes temporal smoothness into account. Table 6 lists the methods described above and points out both the static embedding method applied to each snapshot and how they handle temporal dependencies.

Table 6: Random Walks on Snapshots.

| Algorithm | Static Embedding Method | Temporal Dependence Handling |
|---|---|---|
| (Static) [100] | node2vec | Vector Concatenation |
| Dyn2Vec [29] | DeepWalk variant | Vector Concatenation |
| [25] | Gaussian Initialization | Dynamic Bernoulli Embeddings |
| tNodeEmbed [101] | node2vec | Orthogonal Procrustes and LSTM |
| DynSEM [62] | node2vec | Orthogonal Procrustes and Temporal Smoothing Loss Function |

#### 3.3.2 Evolving Random Walks

Generating random walks for every timestamp is an expensive, time-consuming process. Several approaches first generate embeddings for the initial timestamp using a static random walk approach, and then incrementally update the node representations, exploiting the fact that, in general, only a few nodes are influenced by the topological evolution.
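The shared incremental idea can be sketched as follows (a simplified illustration, not any specific paper's algorithm; graph, walk length and names are assumptions): when an edge update arrives, only walks passing through the affected nodes are resampled, while the rest of the corpus is kept.

```python
import random

rng = random.Random(1)
# Toy graph as an adjacency list (illustrative assumption).
adjacency = {0: [1], 1: [0, 2], 2: [1]}

def random_walk(start, length):
    """Unbiased fixed-length random walk on the current adjacency."""
    walk = [start]
    for _ in range(length - 1):
        walk.append(rng.choice(adjacency[walk[-1]]))
    return walk

# Initial walk corpus: two walks of length 4 per node.
walks = [random_walk(v, 4) for v in adjacency for _ in range(2)]

# A new edge (0, 2) arrives: update the adjacency, then resample only the
# walks that visit an affected node, instead of regenerating the whole corpus.
adjacency[0].append(2)
adjacency[2].append(0)
affected = {0, 2}
walks = [w if affected.isdisjoint(w) else random_walk(w[0], 4) for w in walks]
```

Since edges are only added in this example, the walks that are kept remain valid samples on the updated graph, which is the intuition behind the efficiency of these methods.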
Dynnode2vec [102] follows this approach by sampling node sequences only for evolving nodes, instead of generating random walks for all nodes at a given timestamp, and then feeding these sequences as input to a dynamic Skip-Gram model [103], which is initialized at the first snapshot and used for weight initialization of the Skip-Gram of subsequent timestamps. Other approaches include EvoNRL [104], which initially employs node2vec, stores the random walks in memory, and updates the set of random walks when a single new edge arrives in the network; EvoNRL uses a dynamic Skip-Gram model similar to the one previously mentioned [103]. Sajjad et al. [105] follow the same Skip-Gram implementation over time and propose random walk update algorithms that aim to be statistically indistinguishable from a set of random walks generated from scratch on the new graph. NetWalk [106] also proposes a network embedding algorithm inspired by the Skip-Gram architecture (which the authors call Clique Embedding). It uses a deep autoencoder neural network to learn vector representations from a stream of random walks while minimizing the pairwise distance among all nodes in each walk. Evolving random walks are used to mitigate the computational cost of performing full random walks in every snapshot. The incremental updates are not expected to be as precise as walks regenerated from scratch, as shown in some approaches [105], but the small loss in accuracy is compensated by the large gain in computational efficiency. Table 7 lists the methods based on evolving random walks and points out both the static embedding method applied to each snapshot and how they update random walks and vector representations.

Table 7: Methods based on Evolving Random Walks.
Algorithm | Static Embedding Method | Update Method ---|---|--- dynnode2vec [102] | node2vec | Dynamic Skip-Gram Model EvoNRL [104] | node2vec | Skip-Gram Model over Time [105] | DeepWalk with Unbiased Random Walk Updates | Skip-Gram Model over Time NetWalk [106] | Clique Embedding (AutoEncoder) | Vertex Reservoir and Walk Updating #### 3.3.3 Temporal Random Walk Methods The methods described in Sections 3.3.1 and 3.3.2 consider that random walks and their updates are made to each snapshot separately. However, one way to include the time dependency directly in a sequence of nodes generated by random walks is to build a method to create a corpus of walks over time, respecting the temporal flux. In the literature, these walks are regarded as temporal walks [34, 6]. It is possible to generalize the Skip-Gram architecture to handle continuous- time dynamic networks, as described by Nguyen et al. [34]. In particular, the authors propose a general framework called CTDNE for learning time-preserving embeddings and propose several methods to select the subsequent nodes from a starting node, thus performing a temporal random walk: (i) an unbiased temporal neighbor selection; (ii) a biased selection, which may be based on temporal exponentially-weighted decay (i.e., older timestamps have an exponentially lower contribution to the selection) or on temporal linearly- weighted decay. Several approaches follow CTDNE’s paradigm, including Wu et al. [45], which developed T-EDGE to encompass weighted networks, and De Winter et al. [100], proposing a continuous-time version of node2vec. STWalk2 proposed by Pandhre et al. [107], on the other hand, generates a spatial walk for each snapshot and a temporal walk, employing a Skip-Gram network to combine the two learned embeddings to get node representations. 
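A minimal sketch of such a time-respecting walk, with an exponentially-decaying bias toward temporally closer neighbors, is given below (a simplified illustration of the CTDNE idea; the timestamped edge list, the `decay` parameter and all names are assumptions):

```python
import math
import random

# Directed temporal edges (u, v, timestamp) — illustrative assumption.
edges = [(0, 1, 1.0), (1, 2, 2.0), (1, 3, 3.0), (2, 3, 4.0), (3, 0, 5.0)]
rng = random.Random(7)

def temporal_neighbors(node, t_min):
    """Edges leaving `node` whose timestamp does not precede t_min."""
    return [(v, t) for u, v, t in edges if u == node and t >= t_min]

def temporal_walk(start, length, decay=1.0):
    """Walk in which successive edge timestamps are non-decreasing; the next
    step is chosen with exponentially lower weight for later timestamps."""
    walk, t_now = [start], 0.0
    while len(walk) < length:
        cand = temporal_neighbors(walk[-1], t_now)
        if not cand:
            break  # no time-respecting continuation exists
        weights = [math.exp(-decay * (t - t_now)) for _, t in cand]
        nxt, t_now = rng.choices(cand, weights=weights)[0]
        walk.append(nxt)
    return walk

walk = temporal_walk(0, length=4)
```

The resulting sequences can then be fed to a Skip-Gram model exactly as ordinary random-walk sentences, which is what makes this construction compatible with the static pipeline.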
LSTM-node2vec [108] trains an LSTM autoencoder with node sequences generated by temporal random walks, and afterward initializes node2vec with the input layer weights of the trained LSTM encoder for each snapshot at time $t$. Diffusion prediction problems are related to temporal random walks, but the exact timestamp of diffusion is not necessarily known; instead, only the temporal ordering is defined (i.e., typically one does not know exactly when some information has passed from one node to another, but the source and the target of the diffusion process are known). Models following this objective include (i) DeepCas [11], which uses GRUs and attention mechanisms to predict the future size of a cascade, (ii) DAN [109], which outputs the probability distribution of the next infected node leveraging feed-forward neural networks and an attention mechanism, and (iii) Topo-LSTM [110], which employs LSTMs to handle the temporal dependence of diffusion. Moreover, Yang et al. [111] implement GRUs, GCNs and GraphSAGE to predict the next affected nodes, and a reinforcement learning framework to predict cascade size. Temporal random walks represent a more natural way to deal with continuous-time dynamic graphs [34, 100], since they do not require any time discretization of the graph into snapshots. They are also the ideal approach for diffusion problems, as discussed above. Table 8 lists the temporal random walk methods, pointing out the neural network model employed by each and commenting on their peculiarities.

Table 8: Temporal Random Walk Methods.
| Algorithm | Neural Network Model | Comments |
|---|---|---|
| CTDNE [34] | Skip-Gram model | Defined temporal random walk embedding methods |
| T-EDGE [45] | Skip-Gram model | Encompasses weighted dynamic networks |
| [100] (Dynamic) | node2vec | Continuous version of the first random walks on snapshots |
| STWalk2 [107] | Skip-Gram model | Separates temporal random walks and spatial random walks |
| LSTM-node2vec [108] | node2vec and LSTMs | Handles both temporal sequences and static sequences |
| DeepCas [11] | DeepWalk, GRUs and Attention Mechanism | Diffusion cascades |
| DAN [109] | Feed-forward neural network and attention mechanism | Diffusion cascades |
| [111] | GCNs and GraphSAGE | Diffusion cascades |
| Topo-LSTM [110] | LSTMs | Diffusion cascades |

#### 3.3.4 Other Node Sequence Sampling Methods

Some techniques combine steps or insights from two different random walk based approaches. For instance, several models create a graph containing the nodes at a given time $t$ and their neighbors at the same timestamp $t$ and at previous timestamps within a defined time window, and employ random walk procedures leveraging temporal ordering [107, 112]. In particular, DHNE [46] gives exponentially-decaying weights to edges connecting nodes and past neighbors. These approaches share elements of both random walks over snapshots and temporal random walks. Furthermore, the StreamWalk algorithm introduced by Beres et al. [113] combines evolving random walks and temporal random walks by updating the walk weights to favor more recent edges. Although most node sequence sampling methods are based on random walks, some techniques rely on other ways to aggregate the neighborhood. Liu et al. [114] develop a spatial-temporal neural attention mechanism to estimate the co-occurrence matrix and guide the embedding algorithm to focus on the context information with higher importance.
Dynamic Knowledge Graph Embedding (DKGE) [115], on the other hand, applies an attentive GCN (AGCN) to learn contextual subgraph embeddings over knowledge graphs, integrating them with knowledge embeddings of entities and relations to build the joint representation of each object in the graph. The temporal evolution is leveraged by an online learning strategy that learns knowledge embeddings and contextual element embeddings of emerging entities and relations, as well as knowledge embeddings of existing entities and relations with changed contexts (i.e., whose induced subgraphs have changed). Torricelli et al. [35] introduce weg2vec, which takes a dynamic network and projects it into a weighted link stream (which the authors call a weighted event graph), samples neighborhoods for events (i.e., the edges of the original dynamic graph) from the link stream (i.e., creates a graph that connects events according to the nodes involved, co-occurrence, and event time difference), and inputs sequences of connected events to a Skip-Gram model. Table 9 lists the methods described in this section, with comments on their peculiarities, i.e., how they leverage temporal dependence or how they choose node context for the Skip-Gram model.

Table 9: Other Node Sequence Sampling Methods.

Algorithm | Comments
---|---
DHNE [46] | Historical-current graphs
STWalk1 [107] | Similar to historical-current graphs
DyAne [112] | Supra-adjacency representation
StreamWalk [113] | Temporal random walks updated for affected nodes
DKGE [115] | Attentive GCNs over subgraphs, and online learning
Liu et al. [114] | Co-occurrence matrix using spatial-temporal neural attention model
weg2vec [35] | Weighted event graphs

### 3.4 Edge Reconstruction based Optimization with Temporal Smoothing

Following the taxonomic approach proposed by Cai et al.
[3] for static graphs, some methods for dynamic graphs can be identified whose approach is similar to techniques that directly optimize an objective function based on edge reconstruction. In addition to maximizing edge reconstruction probability or minimizing edge reconstruction loss, these approaches also preserve temporal smoothness. It is noteworthy that these methods may be understood as reconstructing temporal edges between a node $v$ at a given time $t_{i}$ and the same node at the subsequent timestamp, which further justifies this category for embedding methods. DynamicTriad [26] is a representative method of this category, aiming to preserve both the structural information and the evolution patterns of a network by modeling how a closed triad (i.e., three mutually connected vertices) develops from an open triad (i.e., three vertices, two of which are not connected). The authors define the probability that an open triad $(v,u,w)$ (where $v$ and $u$ are not connected) evolves into a closed triad, and the probability that the edge $(v,u)$ will not be created, joining these probabilities into a distance-based loss function. Moreover, the model assumes that highly connected nodes should be embedded closely in the low-dimensional vector space, imposes this condition through a margin-based rank loss function, and finally enforces temporal smoothness at consecutive timestamps. Other approaches based solely on a distance loss function include DNE [30] and Liu et al. [36]. Time-Aware KB Embedding [116] learns node embeddings by modeling relationships as translation operators in the low-dimensional vector space [117] and optimizes a joint margin-based ranking loss function combining a temporal order score function (the temporal encoding) with translation embeddings (the topological encoding). Table 10 lists the methods described above and the loss function each technique aims to minimize, pointing out how they handle temporal dependence.
Table 10: Edge Reconstruction based Optimization.

Algorithm | Temporal Dependence Handling | Loss Function
---|---|---
DNE [30] | Delta of Theoretical Optimal Solution [118] and Temporal Smoothness | Based on LINE [119]
DynamicTriad [26] | Temporal Smoothness | Triadic Closure and Social Homophily
Liu et al. [36] | Temporal Smoothness | Based on Laplacian Eigenmaps
Time-Aware KB Embedding [116] | Temporal Order Score Function | Joint Margin-Based Rank

Note that the temporal smoothing given by a distance-based loss is similar to both matrix factorization problems and Skip-Gram based models. Indeed, there is a general view demonstrating the relationship between network embedding approaches, matrix factorization, and Skip-Gram models [120]. Going further, Liu et al. [120] provide a fundamental connection from an optimization perspective, which is the central idea of edge reconstruction based methods. In this survey, these approaches are kept separate in the taxonomy so as to follow algorithmic properties more strictly, rather than theoretical aspects of loss functions.

### 3.5 Methods based on Graph Kernel

As presented by Cai et al. [3] for static graphs, a few methods handle elementary substructures that are decomposed from the whole graph structure. They incorporate topological attributes built in the network processing step, including graphlet transition counts [58], graphlet frequencies over time [59], and adjacency matrix summation [121], to learn representations capable of reconstructing such elaborate attributes using a shallow autoencoder approach. Hence, since these substructures are used as topological building blocks of a static network, dynamic graph embedding takes into account the transitions between different elementary structures. In addition, Béres et al.
[113] developed an online second-order similarity (SecondOrder) that learns neighborhood similarity by Min-Hash fingerprinting, modifying the embedding vector whenever a neighbor of $v_{i}$ becomes more similar to $v_{j}$ after the edge $(v_{i},v_{j})$ is added to the network, which may be regarded as a graph kernel based approach.

### 3.6 Methods based on Temporal Point Process

Several dynamic graph embedding techniques consider interactions between nodes to be stochastic processes whose probabilities depend on the topological structure of the network, on node features, and on the network history. These methods assume that an event influences a given node, which can consequently interact with other nodes in the network if they are susceptible to the influence of the current node. Therefore, there is a probability that the event will propagate, based on the mathematical definitions presented in Section 2.4, such as the conditional intensity function. The main methods in this category include DyRep [8], which handles a continuous-time deep model of a temporal point process using a conditional intensity function modeling the occurrence of an event $p$ with time scale $k$ between nodes $v_{i}$ and $v_{j}$, and DeepCoEvolve [24], which models the user-item interaction as a multidimensional temporal point process. Other approaches include KnowEvolve [38], modeling a fact in a knowledge graph as a temporal point process; M2DNE [122], capturing edge evolution by a temporal point process with an attentive mechanism, in addition to a general dynamics equation concerning the linking rate; and HTNE [39], with an attention mechanism for the neighborhood formation sequence of a node modeled as a counting process. Furthermore, Knyazev et al.
[40] extend DyRep [8], replacing the original encoder with a procedure that, given an event between nodes $v_{i}$ and $v_{j}$: (i) calculates representations of all nodes at time $t_{k-1}$; (ii) returns an edge embedding for every pair of nodes; (iii) updates the embedding of node $v_{j}$ based on all edges connected to it; and (iv) updates the edge embedding between nodes $v_{i}$ and $v_{j}$. MHDNE [123] models the edge formation process as two temporal sequences carrying historical edge information and network evolution information, respectively. In particular, the network evolution is based on open triangles and the triadic closure problem [26], and the intensity function for a new edge creation at time $t$ is given by a Hawkes process, leveraging a term dependent on node embeddings, an exponential time decay function, and the distance between the nodes' neighborhoods. Wu et al. [42] propose the Graph Biased Temporal Point Process (GBTPP), which aims to compute the probability of an event propagating to nodes $v_{j}\in\mathcal{N}(v_{i})$ at the future timestamp $t_{k+1}$, given the event propagation history and node $v_{i}$, which is influenced by the event at time $t_{k}$.

### 3.7 Agnostic Models

The models described so far use some algorithmic paradigm as a basis for their development. A few approaches in the literature, however, are independent of how the vector representations at each timestamp are obtained. They focus on learning the connections between representations at consecutive time points, or even within a time window. Because of this independence from the algorithmic procedure, we classify them as Agnostic Models.
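For intuition about the temporal point process machinery of Section 3.6, the conditional intensity can be illustrated with a generic Hawkes process, where each past event adds an exponentially decaying excitation on top of a base rate. This is a bare sketch with scalar, hand-picked parameters (`mu`, `alpha`, `beta`); the actual methods above make the excitation depend on node embeddings and network structure:

```python
import math

def hawkes_intensity(t, history, mu=0.1, alpha=0.5, beta=1.0):
    """Conditional intensity lambda(t | history) of a Hawkes process:
    base rate mu plus a decaying kick alpha * exp(-beta * (t - t_i))
    for every past event t_i < t."""
    return mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in history if ti < t)

history = [1.0, 1.2, 1.3]
# Right after a burst of events the intensity sits well above the base
# rate, and it relaxes back toward mu as time passes with no new events.
print(hawkes_intensity(1.4, history) > hawkes_intensity(10.0, history))  # True
```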
Two different paradigms may be followed: (i) the retrofitted model, where vector representations are learned for the initial graph using any state-of-the-art static network embedding, and the dynamics are then captured by retrofitting the initial embedding with the subsequent graph snapshots; and (ii) the embedding space transformation method, where the representations of each graph snapshot are calculated separately by any static method, and a transformation function connecting the embedding at time $t$ to the embedding at the next timestamp is then learned. In the following, we detail these two paradigms.

#### 3.7.1 Retrofitted Model

The retrofitted model [43] is based on the local temporal smoothness assumption, considering the vertex-centric evolution of the network. For the first timestamp, the model employs an existing static embedding method to learn vector representations, but for subsequent timestamps the vectors are updated by a local update method. In retrofitting, the embedding from the previous timestamp, $z_{i}(t-1)$, is revised using the embeddings of the node's neighborhood available from the graph snapshot at time $t$, so that the resulting vector $z_{i}(t)$ is similar both to the prior vector $z_{i}(t-1)$ and to the vectors of its adjacent nodes at timestamp $t$. This update rule is carried out until convergence at each time step. It is noteworthy that, except for the first timestamp, the retrofitted model does not learn embeddings directly from data; instead, it presumes a temporal smoothness criterion to obtain node representations over time.

#### 3.7.2 Embedding Space Transformation Model

The second agnostic approach assumes that the network's time evolution is a global process and attempts to fulfill a global temporal smoothness objective by considering the temporal evolution of the network as a transformation over the node embedding vectors of successive timestamps.
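In its simplest linear form, such a shared transformation can be fit by least squares over consecutive snapshot pairs. The following numpy sketch assumes node-aligned embedding matrices; stacking all pairs into a single system is an illustrative choice, not necessarily the exact procedure of [43]:

```python
import numpy as np

def fit_homogeneous_transform(snapshots):
    """Fit a single matrix W minimizing sum_t ||Z(t+1) - Z(t) @ W||_F^2.

    `snapshots` is a list of (n_nodes, d) embedding matrices, with rows
    aligned so that row i always refers to the same node.
    """
    src = np.vstack(snapshots[:-1])  # stack every source snapshot Z(t)
    tgt = np.vstack(snapshots[1:])   # and its successor Z(t+1)
    W, *_ = np.linalg.lstsq(src, tgt, rcond=None)
    return W

# Synthetic check: snapshots generated by one true linear map are recovered.
rng = np.random.default_rng(0)
Z0 = rng.normal(size=(50, 4))
true_W = rng.normal(size=(4, 4))
snapshots = [Z0, Z0 @ true_W, Z0 @ true_W @ true_W]
W = fit_homogeneous_transform(snapshots)
print(np.allclose(W, true_W))  # True
```

A heterogeneous variant would instead solve one least-squares problem per consecutive pair, yielding $N_{S}-1$ matrices.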
Once the transformation operator is learned, it can map the latent representation from a known snapshot to the next, unobserved snapshot. Saha et al. [43] introduce two paradigms for embedding space evolution: (i) homogeneous transformation, where the transformation is assumed to be the same across any two successive timestamps; and (ii) heterogeneous transformation, which drops the uniformity assumption, i.e., every pair of timestamps has a different transformation procedure. These two paradigms are further discussed in the following:

* • Homogeneous Transformation: This transformation is shared across timestamps, and Saha et al. [43] propose a linear transformation to learn these mappings between embedding spaces.

* • Heterogeneous Transformation: Methods based on heterogeneous transformation learn a different projection matrix for each pair of consecutive network snapshots. Saha et al. [43] employ a linear heterogeneous transformation model, learning $N_{S}-1$ different transformation matrices and obtaining a final transformation matrix by combining them.

### 3.8 Other Dynamic Graph Embedding Approaches

Finally, some methods do not fit into any of the discussed categories, either because they present a specific methodology different from all the approaches above, or because they combine different techniques without any one of them being dominant. It is also interesting to note that some methods aggregate the temporal information of the dynamic network into a static graph (i.e., a network containing all interactions and vertices present from the beginning of the network until the final timestamp of analysis) [5, 33]. These works then use static embedding methods over a network that encompasses temporal information stored in its edges.
## 4 Dynamic Graph Embedding Applications

In this section, we provide an overview of the different network applications that are typically improved by the embedding methods for dynamic graphs presented so far. Applications of dynamic graph embeddings for network mining can be divided according to which elements of the network they are oriented to or focused on: (i) node-related, including node classification, recommender systems, and trajectory analysis; (ii) edge-related, including link prediction and event time prediction; (iii) node- and edge-related, including anomaly detection and diffusion prediction; and (iv) graph-related, including graph classification over time, network visualization, and graph reconstruction. The complete list of tasks discussed in this survey is shown in Figure 10.

Figure 10: Every embedding application for dynamic networks covered in this survey, organized by which elements of the network they are oriented to or focused on.

### 4.1 Node-Related Applications

Node embeddings are used for various purposes in network analysis, with applications already known from static graphs but handled over time, including node classification, node clustering, and recommender systems, as well as novel applications specific to the dynamic scenario, such as node attribute prediction and trajectory tracking.

* • Node Classification: The node classification problem focuses on assigning a class label to each node in a graph based on rules learned from the labeled nodes. In a dynamic network, it is possible (i) to classify nodes whose labels are unknown at a given timestamp $t$, considering the behavior and labels of the other nodes in the network; or (ii) to predict the classification of a node in the future, given that node labels can vary over time [124].
Approaches concerning node classification apply a classifier on labeled node embeddings for training, including: (i) a linear layer [54]; (ii) SVMs [50, 48, 46, 125, 123, 58, 121]; (iii) logistic regression [7, 31, 39, 122, 100, 108, 114, 105, 112, 30, 36, 125]; (iv) softmax [47, 96, 93, 101, 78, 10]; and (v) random forests and gradient boosting techniques [100, 58, 121].

* • Node Clustering: While the classification task is supervised, clustering similar nodes is an unsupervised task, aiming to group similar vertices when label information is unavailable. An important challenge in this task is to ensure that the embeddings of similar nodes are close to each other in the vector space while still capturing possible node transitions between different clusters over time, as in DynGEM [10] and the dyngraph2vec variants [27]. Clusters in the embedding space can also represent different behavior patterns of nodes over time, which Rossi et al. [55] define as an analysis of the role evolution of each node in the network.

* • Recommender Systems: A dynamic network consisting of users, items, and timestamped interactions between users and items may be explored by embedding methods to recommend items to users according to their interests over time. As discussed by Kazemi et al. [12], recommendations may suggest items not exactly similar to a user's interests in order to attract the user to a novel interest, and recommendations may arouse a future desire for items of a certain type even if the user does not display any immediate interest.

* • Node Attribute Prediction: It is often important to predict time-varying attributes of network nodes.
Formally, given a time-varying graph $\mathcal{G}=\\{G(t_{0}),...,G(t_{N_{S}-1})\\}$ with additional node attributes $X(t)\in\mathbb{R}^{f}$ (where $f$ is the number of attributes) as the training data, this task aims to estimate the real-valued variable $X(t_{N_{S}})\in\mathbb{R}^{f}$ at time $t_{N_{S}}$ for each node in the graph [124]. This problem is also known as relational time series regression, and node embeddings over time can serve as input to time series prediction models, such as ARIMA or recurrent neural networks, to enhance prediction by leveraging topological information along with attribute evolution.

* • Trajectory Tracking: From the representations obtained by embedding over time, it is possible to analyze the trajectories of each entity. Ferreira et al. [28] capture ideological changes in two distinct party systems (Brazil and the United States), as expressed by members' voting behavior, by mapping the network into a temporal latent ideological space. The authors track individual members over time in the low-dimensional vector space, analyzing how the vector representations of individual members change, and then measure ideological shifts over time. Although this task shares similarities with the node clustering problem, the focus of trajectory tracking is not on the groups themselves but on the transitions between them.

### 4.2 Edge-Related Applications

Edge-related tasks comprise the problem most commonly explored by the techniques presented in this survey: dynamic link prediction. A novel application compared with static methods, however, also appears in the literature: event time prediction, whose focus is on detecting the time instant when a new edge should appear. Finally, edge classification may also be treated on heterogeneous dynamic graphs where edges have labels.
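The edge-level tasks below typically operate on edge representations derived from node embeddings, e.g., by a Hadamard product or concatenation, together with a simple decoder such as the inner product for scoring candidate links. A minimal sketch, assuming the embeddings have already been learned (the operators shown are common choices, not tied to any one method):

```python
import numpy as np

def edge_features(z, pairs, op="hadamard"):
    """Build edge representations from node embeddings.

    z: (n_nodes, d) embedding matrix; pairs: iterable of (i, j) indices.
    'hadamard' yields d-dimensional features, 'concat' 2d-dimensional.
    """
    if op == "hadamard":
        return np.array([z[i] * z[j] for i, j in pairs])
    if op == "concat":
        return np.array([np.concatenate([z[i], z[j]]) for i, j in pairs])
    raise ValueError(f"unknown operator: {op}")

def link_score(z, i, j):
    """Inner-product decoder: a higher score means edge (i, j) is more likely."""
    return float(z[i] @ z[j])

z = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])  # nodes 0 and 1 are similar
print(link_score(z, 0, 1) > link_score(z, 0, 2))  # True: similar pair scores higher
```

The feature matrices from `edge_features` can then be fed to any of the classifiers listed for node classification in Section 4.1.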
* • Dynamic Link Prediction: Link prediction in dynamic networks is more complex than in static networks, since it comprises two different tasks: (i) temporal link prediction (the prediction of new edges) [126], where, given a sequence of snapshots of an evolving network $\mathcal{G}=\\{G(t_{0}),...,G(t_{N_{S}-1})\\}$, the goal is to predict the links in $G(t_{N_{S}})$ (where $N_{S}$ is the total number of snapshots), i.e., to construct a function $f(v,u)$ that predicts whether an edge $(v,u)$ exists between nodes $v$ and $u$ at time $t_{N_{S}}$ [43]; and (ii) link completion (the prediction of previously observed edges) [126], which consists of finding the missing links along the evolving network. Most approaches treat link prediction as a classification task, where the labels are (i) existence or (ii) non-existence of an edge. Junuthula et al. [126] provide a deeper discussion of the evaluation of link prediction on dynamic networks, and Yu et al. generalize temporal link prediction to include: (i) prediction of link weights, with the aforementioned definition as the particular case where the link weight is restricted to 0 or 1; and (ii) prediction at timestamp $N_{S}+\alpha$, where $\alpha\geq 0$, with the classical definition above as the special case $\alpha=0$.

* • Event Time Prediction: In dynamic networks it is natural to ask at which timestamp a given interaction can occur, configuring the event time prediction task. Methods based on temporal point processes have a mathematical formulation to predict the next time point $t_{k}$ of an event, given a pair of nodes $v$ and $u$ and the network history (i.e., a list of interactions over time), including DyRep [8] and DeepCoevolve [24].

* • Edge Classification: When the edges of a network can belong to certain classes, the problem of classifying the edges shares similarities with the node classification task. For instance, interactions between nodes may be associated with a trustworthiness rating (i.e.
users who trust others, or not) or with a sentiment (i.e., a post on a social network). Therefore, predicting the label of an edge $(v_{i},v_{j})$ over time may be done by concatenating node embeddings, or operating over them, to obtain edge embeddings, and applying a classifier afterward [47].

### 4.3 Node- and Edge-Related Tasks

Some tasks can be applied to both nodes and edges in a graph, depending on how the problem is formulated. In this context, two tasks have been identified so far: (i) anomaly detection, which can relate to a node with anomalous behavior or to an unwanted or unexpected edge; and (ii) diffusion prediction, which can either identify which nodes will be affected by a diffusion process or detect the edges most likely to diffuse information.

* • Anomaly Detection: Anomaly detection is an important application for detecting malicious activity in networks. Anomalies can be detected through unexpected changes in the vector representations of nodes and edges, suggesting that some unexpected activity is arising at some timestamp. Goyal et al. [10] define $\delta_{t}=\left\|Z(t_{k+1})-Z(t_{k})\right\|_{F}$ as the change in embedding between times $t_{k}$ and $t_{k+1}$, and consider a node's behavior anomalous when its embeddings change beyond a threshold. This threshold can be calibrated differently for each specific problem, and can be tuned to identify certain types of anomalies based on node behavior changes, where each type of behavior is encoded differently. A similar approach is presented by Rossi et al. [55], who define several groups of behaviors for the nodes of the network based on node embeddings (i.e., similar to node clustering); the authors then detect anomalous behaviors from abrupt node transitions between different clusters. Khoshraftar et al.
[108] formulate anomaly detection as a classification task, where edges may belong to anomalous or normal class labels, using node embeddings to compute edge representations and then using their method (LSTM-node2vec) to classify the edges.

* • Diffusion Prediction: Diffusion problems solved by embedding methods may be categorized as (i) a sequence prediction problem (i.e., a microscopic diffusion problem), aiming to predict the next affected node given the previously affected ones [109, 111, 110, 127, 128]; or (ii) a regression problem, which predicts future numerical properties of the network (e.g., the total number of infected nodes in a macroscopic diffusion problem) [11, 111].

### 4.4 Graph-Related Applications

Problems related to the whole graph usually analyze the network globally, therefore dealing with tasks that are not centered on vertices or edges. The main examples found in the literature are graph classification, network visualization, and graph reconstruction.

* • Graph Classification: Classifying the whole graph over time into one class from a set of predefined categories $\mathcal{L}^{G}$ is a relevant problem when the topological structure and the possible attributes of each node in a network configure a globally classifiable behavior [49, 48]. By obtaining a whole-graph embedding over time (either by aggregating node embeddings or by using graph kernels, as in Section 3.5), it is possible to use the same classifiers presented for the node classification task, either to perform classification at known timestamps or to predict the classification of the graph in the future.

* • Network Visualization: It is also possible to visualize dynamic graphs in 2D or 3D space by applying dimensionality reduction techniques that preserve the embedding structure, such as t-SNE [129], to node embeddings. Goyal et al. [10] state that, to avoid visualization instability (i.e.
embedding instability over time), t-SNE needs to be initialized with an identical random state for all timestamps.

* • Graph Reconstruction: The learned vector representations may reconstruct the dynamic graph through operations in the vector space that decode similarity information between pairs of nodes, such as a dot product or a pairwise distance, to estimate the adjacency matrix or the weight matrix. Goyal et al. [10] propose a methodology that ranks pairs of nodes according to their reconstructed similarity and defines the reconstruction precision as the ratio of real links among the top $k$ pairs of nodes.

## 5 Conclusion and Future Directions

In this survey, we conducted a comprehensive review of the literature on embedding methods for dynamic graphs. We defined the problem of embeddings for dynamic graphs inspired by previous surveys on static graph embeddings, while introducing important concepts needed to handle dynamic graph scenarios. We proposed a taxonomy to classify the problem settings for dynamic graph embedding, broadening the design presented in the static scenario by introducing fundamental time aspects of embeddings, such as the different dynamic graph models that can be embedded, in addition to embedding outputs that aggregate temporal information or track representation trajectories. We also proposed a taxonomy for the different embedding paradigms for dynamic graphs, classifying them according to the methodology they use, the topological-temporal properties they preserve, and the assumptions made for the method to be valid. After that, we summarized the applications that the embedding of dynamic graphs enables. Finally, we point out some directions for future research:

* • Development and Expansion of Libraries and Frameworks for Dynamic Graph Embedding: With the increasing number of dynamic graph embedding techniques, it is worth investing in the development and expansion of a framework capable of unifying the different algorithms, applications, and standard benchmark datasets.
Goyal et al. [130] propose DynamicGEM, an open-source Python library containing state-of-the-art algorithms for dynamic graph embeddings, focusing on node embeddings. The library contains an evaluation framework for graph reconstruction, static and temporal link prediction, node classification, and temporal visualization, with various metrics implemented to evaluate the state-of-the-art methods, as well as examples of evolving networks from various domains. Since DynamicGEM provides a template for adding new algorithms, it would be interesting to invest in further developing the package so as to incrementally insert new techniques, and to expand the framework to include embeddings of other types (such as edges, hybrids, and whole graphs) as well as methods for continuous-time dynamic graphs. Building a reference package of embedding methods for dynamic graphs (be it DynamicGEM or another) would benefit the community interested in this topic.

* • Mathematical Analysis and Different Stability Metrics: Defining useful and accurate stability metrics is a future research direction in the area with a more theoretical focus. Goyal et al. [10] suggest a metric considering the adjacency matrix and snapshot models. Nevertheless, it is necessary to explore these metrics in more detail, by testing them on real-world networks and checking the behavior of embeddings for different methods and problems. In addition, a more sophisticated mathematical analysis may improve the understanding of the relationship between representations and the evolution of dynamic graphs.

* • Temporal Multiscale Evolution Embedding: In real networks, temporal evolution phenomena may be associated with different scales (e.g., daily, weekly, monthly, and yearly phenomena). It would be interesting to investigate embedding techniques capable of efficiently capturing these peculiarities in a methodology that deals with temporal multiscale evolution. Trivedi et al.
[8] take an important step in this direction by considering two different timescales: the dynamics on the network and the dynamics of the network.

* • Dynamic Hypergraph Embedding: The existing models for dynamic graphs only consider edges connecting two nodes, and are therefore unable to handle hypergraphs, where sets of nodes form connections without necessarily having binary ordered relations. Although there are a few works on hypergraph embedding [131], and even fewer extended to dynamic hypergraphs [132], a promising research direction is to develop new methods for dynamic hypergraphs, as well as to extend some of the existing dynamic graph embedding methods to handle hyperedges.

* • Capturing Latencies and Spatial-Temporal Edge Patterns: Although the methods described in this survey capture dynamic behaviors such as topological evolution, feature evolution, and processes on the network, more sophisticated temporal models take into account that nodes and edges are not created or removed instantaneously [6]. Thus, latency is an important feature of dynamic networks, as it carries information about the affinity between nodes or of a node within the network, and no method described above has a methodology for dealing with such factors. In addition, spatio-temporal edges make the embedding of dynamic graphs more complex than extracting information from snapshots or from timestamped edges, as temporal correlations between non-consecutive timestamps are more complex [22].

* • Generalization of Graph Embedding to Higher-Order Dimensional Networks: Given the diversity of models for dynamic graphs, there is no consensus on a more general model and, consequently, on an embedding that can be generalized to as many dynamic graphs as possible.
Keeping this in mind, there are graph generalization models, including MAGs [22] and Stream Graphs [23], and a future research direction may be the application of embedding methods to such models. Furthermore, it would be interesting to extend embedding to higher-order graphs, allowing the capture not only of temporal and topological properties but also of multilayer structures at different time and connectivity scales.

* • Generation of Property-Preserving Network Evolution in Embedding Space: Several complex network properties, such as pathways, degree distribution, and scale invariance, may change over time, and finding out which patterns of temporal evolution conserve or change these properties is challenging. Cheng et al. [133] propose a structure-preserving model reduction procedure developed for linear network systems, whereas Rossi et al. [55] propose a matrix factorization to discover the roles of certain vertices in a network and study possible changes in these roles over time. One possible direction for future research is to use representations in low-dimensional spaces to study and generate evolutions of a network that preserve certain properties of interest. One way to pursue this idea is to explore generative models, such as variational autoencoders and GANs, using learned distributions of the input data to generate new networks similar to the training ones, thereby capturing patterns associated with their characteristics.

## Acknowledgment

This work has been partially supported by CAPES, CNPq, FAPEMIG, and FAPERJ. Moreover, this paper is dedicated to the memory of our dear co-worker Artur Ziviani, who passed away while this paper was being peer-reviewed. Artur was a brilliant researcher and dedicated advisor.

## References

* [1] Albert-László Barabási et al. Network Science. Cambridge University Press, 2016.
* [2] William L Hamilton, Rex Ying, and Jure Leskovec.
Representation Learning on Graphs: Methods and Applications. IEEE Data Engineering Bulletin, 2017.
* [3] Hongyun Cai, Vincent W Zheng, and Kevin Chen-Chuan Chang. A Comprehensive Survey of Graph Embedding: Problems, Techniques, and Applications. IEEE Transactions on Knowledge and Data Engineering, 30(9):1616–1637, 2018.
* [4] Palash Goyal and Emilio Ferrara. Graph Embedding Techniques, Applications, and Performance: A Survey. Knowledge-Based Systems, 151:78–94, 2018.
* [5] Daniel M Dunlavy, Tamara G Kolda, and Evrim Acar. Temporal Link Prediction using Matrix and Tensor Factorizations. ACM Transactions on Knowledge Discovery from Data (TKDD), 5(2):10, 2011.
* [6] Arnaud Casteigts, Paola Flocchini, Walter Quattrociocchi, and Nicola Santoro. Time-Varying Graphs and Dynamic Networks. International Journal of Parallel, Emergent and Distributed Systems, 27(5):387–408, 2012.
* [7] Jundong Li, Harsh Dani, Xia Hu, Jiliang Tang, Yi Chang, and Huan Liu. Attributed Network Embedding for Learning in a Dynamic Environment. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, pages 387–396. ACM, 2017.
* [8] Rakshit Trivedi, Mehrdad Farajtabar, Prasenjeet Biswal, and Hongyuan Zha. DyRep: Learning Representations over Dynamic Graphs. In International Conference on Learning Representations, 2019.
* [9] Linhong Zhu, Dong Guo, Junming Yin, Greg Ver Steeg, and Aram Galstyan. Scalable Temporal Latent Space Inference for Link Prediction in Dynamic Social Networks. IEEE Transactions on Knowledge and Data Engineering, 28(10):2765–2777, 2016.
* [10] Palash Goyal, Nitin Kamra, Xinran He, and Yan Liu. DynGEM: Deep Embedding Method for Dynamic Graphs. arXiv preprint arXiv:1805.11273, 2018.
* [11] Cheng Li, Jiaqi Ma, Xiaoxiao Guo, and Qiaozhu Mei. DeepCas: An End-to-end Predictor of Information Cascades. In Proceedings of the 26th International Conference on World Wide Web, pages 577–586. International World Wide Web Conferences Steering Committee, 2017.
* [12] Seyed Mehran Kazemi, Rishab Goel, Kshitij Jain, Ivan Kobyzev, Akshay Sethi, Peter Forsyth, and Pascal Poupart. Representation Learning for Dynamic Graphs: A Survey. Journal of Machine Learning Research, 21(70):1–73, 2020. * [13] Yu Xie, Chunyi Li, Bin Yu, Chen Zhang, and Zhouhua Tang. A Survey on Dynamic Network Embedding. arXiv preprint arXiv:2006.08093, 2020. * [14] Joakim Skarding, Bogdan Gabrys, and Katarzyna Musial. Foundations and Modelling of Dynamic Networks using Dynamic Graph Neural Networks: A Survey. IEEE Access, 2021. * [15] Leo Katz. A New Status Index Derived from Sociometric Analysis. Psychometrika, 18(1):39–43, 1953. * [16] Lada A Adamic and Eytan Adar. Friends and Neighbors on the Web. Social Networks, 25(3):211–230, 2003. * [17] Peng Cui, Xiao Wang, Jian Pei, and Wenwu Zhu. A Survey on Network Embedding. IEEE Transactions on Knowledge and Data Engineering, 2018. * [18] Daokun Zhang, Jie Yin, Xingquan Zhu, and Chengqi Zhang. Network Representation Learning: A Survey. IEEE Transactions on Big Data, 2018. * [19] Quan Wang, Zhendong Mao, Bin Wang, and Li Guo. Knowledge Graph Embedding: A Survey of Approaches and Applications. IEEE Transactions on Knowledge and Data Engineering, 29(12):2724–2743, 2017. * [20] Maximilian Nickel, Kevin Murphy, Volker Tresp, and Evgeniy Gabrilovich. A Review of Relational Machine Learning for Knowledge Graphs. Proceedings of the IEEE, 104(1):11–33, 2015. * [21] Daniele Grattarola and Cesare Alippi. Graph Neural Networks in TensorFlow and Keras with Spektral. arXiv preprint arXiv:2006.12138, 2020. * [22] Klaus Wehmuth, Artur Ziviani, and Eric Fleury. A Unifying Model for Representing Time-Varying Graphs. In 2015 IEEE International Conference on Data Science and Advanced Analytics (DSAA), pages 1–10. IEEE, 2015. * [23] Matthieu Latapy, Tiphaine Viard, and Clémence Magnien. Stream Graphs and Link Streams for the Modeling of Interactions over Time. Social Network Analysis and Mining (SNAM), 8(61), December 2018. 
* [24] Hanjun Dai, Yichen Wang, Rakshit Trivedi, and Le Song. Recurrent Coevolutionary Latent Feature Processes for Continuous-Time Recommendation. In Proceedings of the 1st Workshop on Deep Learning for Recommender Systems, pages 29–34, 2016. * [25] Chuanchang Chen, Yubo Tao, and Hai Lin. Dynamic Network Embeddings for Network Evolution Analysis. arXiv preprint arXiv:1906.09860, 2019. * [26] Lekui Zhou, Yang Yang, Xiang Ren, Fei Wu, and Yueting Zhuang. Dynamic Network Embedding by Modeling Triadic Closure Process. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018. * [27] Palash Goyal, Sujit Rokka Chhetri, and Arquimedes Canedo. dyngraph2vec: Capturing Network Dynamics using Dynamic Graph Representation Learning. Knowledge-Based Systems, 2019. * [28] Carlos Henrique Gomes Ferreira, Fabricio Murai Ferreira, Breno de Sousa Matos, and Jussara Marques de Almeida. Modeling Dynamic Ideological Behavior in Political Networks. The Journal of Web Science, 7, 2019. * [29] Sandra Mitrovic and Jochen De Weerdt. Dyn2vec: Exploiting Dynamic Behaviour using Difference Networks-Based Node Embeddings for Classification. In Proceedings of the International Conference on Data Science, pages 194–200. CSREA Press, 2019. * [30] Lun Du, Yun Wang, Guojie Song, Zhicong Lu, and Junshan Wang. Dynamic Network Embedding: An Extended Approach for Skip-gram based Network Embedding. In IJCAI, pages 2086–2092, 2018. * [31] Sedigheh Mahdavi, Shima Khoshraftar, and Aijun An. Dynamic Joint Variational Graph Autoencoders. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 385–401. Springer, 2019. * [32] Ziwei Zhang, Peng Cui, Jian Pei, Xiao Wang, and Wenwu Zhu. TIMERS: Error-Bounded SVD Restart on Dynamic Networks. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018. * [33] Ryohei Hisano. Semi-Supervised Graph Embedding Approach to Dynamic Link Prediction. In International Workshop on Complex Networks, pages 109–121. 
Springer, 2018. * [34] Giang Hoang Nguyen, John Boaz Lee, Ryan A Rossi, Nesreen K Ahmed, Eunyee Koh, and Sungchul Kim. Continuous-Time Dynamic Network Embeddings. In Companion Proceedings of the The Web Conference 2018, pages 969–976. International World Wide Web Conferences Steering Committee, 2018. * [35] Maddalena Torricelli, Márton Karsai, and Laetitia Gauvin. weg2vec: Event Embedding for Temporal Networks. Scientific Reports, 10(1):1–11, 2020. * [36] Xi Liu, Ping-Chun Hsieh, Nick Duffield, Rui Chen, Muhe Xie, and Xidao Wen. Real-Time Streaming Graph Embedding Through Local Actions. In Companion Proceedings of The 2019 World Wide Web Conference, pages 285–293. ACM, 2019. * [37] Junchi Yan, Hongteng Xu, and Liangda Li. Modeling and Applications for Temporal Point Processes. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 3227–3228. ACM, 2019. * [38] Rakshit Trivedi, Hanjun Dai, Yichen Wang, and Le Song. Know-Evolve: Deep Temporal Reasoning for Dynamic Knowledge Graphs. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 3462–3471. JMLR. org, 2017. * [39] Yuan Zuo, Guannan Liu, Hao Lin, Jia Guo, Xiaoqian Hu, and Junjie Wu. Embedding Temporal Network via Neighborhood Formation. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2857–2866. ACM, 2018. * [40] Boris Knyazev, Carolyn Augusta, and Graham W Taylor. Learning Temporal Attention in Dynamic Graphs with Bilinear Interactions. arXiv preprint arXiv:1909.10367, 2019. * [41] Hanjun Dai, Yichen Wang, Rakshit Trivedi, and Le Song. Deep Coevolutionary Network: Embedding User and Item Features for Recommendation. arXiv preprint arXiv:1609.03675, 2016. * [42] Weichang Wu, Huanxi Liu, Xiaohu Zhang, Yu Liu, and Hongyuan Zha. Modeling Event Propagation via Graph Biased Temporal Point Process. IEEE Transactions on Neural Networks and Learning Systems, 2020\. 
* [43] Tanay Kumar Saha, Thomas Williams, Mohammad Al Hasan, Shafiq Joty, and Nicholas K Varberg. Models for Capturing Temporal Smoothness in Evolving Networks for Learning Latent Representation of Nodes. arXiv preprint arXiv:1804.05816, 2018. * [44] Kai Lei, Meng Qin, Bo Bai, Gong Zhang, and Min Yang. GCN-GAN: A Non-Linear Temporal Link Prediction Model for Weighted Dynamic Networks. In IEEE INFOCOM 2019-IEEE Conference on Computer Communications, pages 388–396. IEEE, 2019. * [45] Jiajing Wu, Dan Lin, Zibin Zheng, and Qi Yuan. T-EDGE: Temporal wEighted MultiDiGraph Embedding for Ethereum Transaction Network Analysis. arXiv preprint arXiv:1905.08038, 2019. * [46] Ying Yin, Li-Xin Ji, Jian-Peng Zhang, and Yu-Long Pei. DHNE: Network Representation Learning Method for Dynamic Heterogeneous Networks. IEEE Access, 7:134782–134792, 2019. * [47] Aldo Pareja, Giacomo Domeniconi, Jie Chen, Tengfei Ma, Toyotaro Suzumura, Hiroki Kanezashi, Tim Kaler, Tao B. Schardl, and Charles E. Leiserson. EvolveGCN: Evolving graph convolutional networks for dynamic graphs. In Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence, 2020. * [48] Aynaz Taheri, Kevin Gimpel, and Tanya Berger-Wolf. Learning to Represent the Evolution of Dynamic Graphs with Recurrent Models. In Companion Proceedings of The 2019 World Wide Web Conference, pages 301–307, 2019. * [49] Aynaz Taheri and Tanya Berger-Wolf. Predictive Temporal Embedding of Dynamic Graphs. In Proceedings of the 2019 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, pages 57–64, 2019. * [50] Hao Wei, Guyu Hu, Wei Bai, Shiming Xia, and Zhisong Pan. Lifelong Representation Learning in Dynamic Attributed Networks. Neurocomputing, 358:1–9, 2019. * [51] Bing Yu, Haoteng Yin, and Zhanxing Zhu. Spatio-Temporal Graph Convolutional Networks: A Deep Learning Framework for Traffic Forecasting. arXiv preprint arXiv:1709.04875, 2017. 
* [52] Youru Li, Zhenfeng Zhu, Deqiang Kong, Meixiang Xu, and Yao Zhao. Learning Heterogeneous Spatial-Temporal Representation for Bike-Sharing Demand Prediction. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 1004–1011, 2019. * [53] Songgaojun Deng, Huzefa Rangwala, and Yue Ning. Learning Dynamic Context Graphs for Predicting Social Events. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 1007–1016. ACM, 2019. * [54] Yuan Yuan, Xiaodan Liang, Xiaolong Wang, Dit-Yan Yeung, and Abhinav Gupta. Temporal Dynamic Graph LSTM for Action-Driven Video Object Detection. In Proceedings of the IEEE International Conference on Computer Vision, pages 1801–1810, 2017. * [55] Ryan A Rossi, Brian Gallagher, Jennifer Neville, and Keith Henderson. Modeling Dynamic Behavior in Large Evolving Graphs. In Proceedings of the Sixth ACM International Conference on Web Search and Data Mining, pages 667–676. ACM, 2013. * [56] Rishab Goel, Seyed Mehran Kazemi, Marcus Brubaker, and Pascal Poupart. Diachronic Embedding for Temporal Knowledge Graph Completion. arXiv preprint arXiv:1907.03143, 2019. * [57] Changping Meng, S Chandra Mouli, Bruno Ribeiro, and Jennifer Neville. Subgraph Pattern Neural Networks for High-Order Graph Evolution Prediction. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018. * [58] Mahmudur Rahman and Mohammad Al Hasan. Link Prediction in Dynamic Networks using Graphlet. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 394–409. Springer, 2016. * [59] V Dave and M Hasan. Triangle Completion Time Prediction using Time-Conserving Embedding, 2019. * [60] Wenchao Yu, Charu C Aggarwal, and Wei Wang. Temporally Factorized Network Modeling for Evolutionary Network Analysis. In Proceedings of the Tenth ACM International Conference on Web Search and Data Mining, pages 455–464. ACM, 2017. 
* [61] David Kempe, Jon Kleinberg, and Éva Tardos. Maximizing the Spread of Influence through a Social Network. In Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 137–146, 2003. * [62] Yujing Zhou, Weile Liu, Yang Pei, Lei Wang, Daren Zha, and Tianshu Fu. Dynamic Network Embedding by Semantic Evolution. In 2019 International Joint Conference on Neural Networks (IJCNN), pages 1–8. IEEE, 2019. * [63] Aravind Sankar, Yanhong Wu, Liang Gou, Wei Zhang, and Hao Yang. Dynamic Graph Representation Learning via Self-Attention Networks. arXiv preprint arXiv:1812.09430, 2018. * [64] Wenchao Yu, Wei Cheng, Charu C Aggarwal, Haifeng Chen, and Wei Wang. Link Prediction with Spatial and Temporal Consistency in Dynamic Networks. In IJCAI, pages 3343–3349, 2017. * [65] G. W. Stewart. Matrix Perturbation Theory, 1990. * [66] Evrim Acar and Bülent Yener. Unsupervised Multiway Data Analysis: A Literature Survey. IEEE Trans Knowl Data Eng, 21(1):6–20, 2008. * [67] Dimitrios Rafailidis and Alexandros Nanopoulos. Modeling the Dynamics of User Preferences in Coupled Tensor Factorization. In Proceedings of the 8th ACM Conference on Recommender Systems, pages 321–324. ACM, 2014. * [68] Xiaomin Fang, Rong Pan, Guoxiang Cao, Xiuqiang He, and Wenyuan Dai. Personalized Tag Recommendation through Nonlinear Tensor Factorization using Gaussian Kernel. In Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015. * [69] Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The Graph Neural Network Model. IEEE Transactions on Neural Networks, 20(1):61–80, 2008. * [70] Mathias Niepert, Mohamed Ahmed, and Konstantin Kutzkov. Learning Convolutional Neural Networks for Graphs. In International Conference on Machine Learning, pages 2014–2023, 2016. * [71] Daixin Wang, Peng Cui, and Wenwu Zhu. Structural Deep Network Embedding. 
In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1225–1234. ACM, 2016. * [72] Jinyin Chen, Xuanheng Xu, Yangyang Wu, and Haibin Zheng. GC-LSTM: Graph Convolution Embedded LSTM for Dynamic Link Prediction. arXiv preprint arXiv:1812.04206, 2018. * [73] Sepp Hochreiter and Jürgen Schmidhuber. Long Short-Term Memory. Neural Computation, 9(8):1735–1780, 1997. * [74] Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. arXiv preprint arXiv:1406.1078, 2014. * [75] Jinyin Chen, Jian Zhang, Xuanheng Xu, Chenbo Fu, Dan Zhang, Qingpeng Zhang, and Qi Xuan. E-LSTM-D: A Deep Learning Framework for Dynamic Network Link Prediction. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2019. * [76] Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. Gated Graph Sequence Neural Networks. arXiv preprint arXiv:1511.05493, 2015. * [77] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural Machine Translation by Jointly Learning to Align and Translate. arXiv preprint arXiv:1409.0473, 2014. * [78] Dongkuan Xu, Wei Cheng, Dongsheng Luo, Xiao Liu, and Xiang Zhang. Spatio-Temporal Attentive RNN for Node Classification in Temporal Attributed Graphs. In Proceedings of the 28th International Joint Conference on Artificial Intelligence, pages 3947–3953. AAAI Press, 2019. * [79] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet Classification with Deep Convolutional Neural Networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012. * [80] Thomas N Kipf and Max Welling. Semi-Supervised Classification with Graph Convolutional Networks. arXiv preprint arXiv:1609.02907, 2016. * [81] Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive Representation Learning on Large Graphs. 
In Advances in Neural Information Processing Systems, pages 1024–1034, 2017. * [82] Yuyang Gao, Lingfei Wu, Houman Homayoun, and Liang Zhao. DynGraph2Seq: Dynamic-Graph-to-Sequence Interpretable Learning for Health Stage Prediction in Online Health Forums. In IEEE ICDM, pages 1042–1047, 2019. * [83] Stephen Bonner, John Brennan, Ibad Kureshi, Georgios Theodoropoulos, Andrew Stephen McGough, and Boguslaw Obara. Temporal Graph Offset Reconstruction: Towards Temporally Robust Graph Representation Learning. In 2018 IEEE International Conference on Big Data (Big Data), pages 3737–3746. IEEE, 2018. * [84] Ling Zhao, Yujiao Song, Chao Zhang, Yu Liu, Pu Wang, Tao Lin, Min Deng, and Haifeng Li. T-GCN: A Temporal Graph Convolutional Network for Traffic Prediction. IEEE Transactions on Intelligent Transportation Systems, 2019. * [85] Yun Xiong, Yao Zhang, Hanjie Fu, Wei Wang, Yangyong Zhu, and S Yu Philip. DynGraphGAN: Dynamic Graph Embedding via Generative Adversarial Networks. In International Conference on Database Systems for Advanced Applications, pages 536–552. Springer, 2019. * [86] Diederik P Kingma and Max Welling. Auto-Encoding Variational Bayes. arXiv preprint arXiv:1312.6114, 2013. * [87] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative Adversarial Nets. In Advances in Neural Information Processing Systems, pages 2672–2680, 2014. * [88] Thomas N Kipf and Max Welling. Variational Graph Auto-Encoders. arXiv preprint arXiv:1611.07308, 2016. * [89] Hongwei Wang, Jia Wang, Jialin Wang, Miao Zhao, Weinan Zhang, Fuzheng Zhang, Xing Xie, and Minyi Guo. GraphGAN: Graph Representation Learning with Generative Adversarial Nets. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018. * [90] Stephen Bonner, Amir Atapour-Abarghouei, Philip T Jackson, John Brennan, Ibad Kureshi, Georgios Theodoropoulos, Andrew Stephen McGough, and Boguslaw Obara. 
Temporal Neighbourhood Aggregation: Predicting Future Links in Temporal Graphs via Recurrent Variational Graph Convolutions. In 2019 IEEE International Conference on Big Data (Big Data), pages 5336–5345. IEEE, 2019. * [91] Yifeng Zhao, Xiangwei Wang, Hongxia Yang, Le Song, and Jie Tang. Large Scale Evolving Graphs with Burst Detection. In 28th International Joint Conference on Artificial Intelligence (IJCAI), 2019. * [92] Ehsan Hajiramezanali, Arman Hasanzadeh, Krishna Narayanan, Nick Duffield, Mingyuan Zhou, and Xiaoning Qian. Variational Graph Recurrent Neural Networks. In Advances in Neural Information Processing Systems, pages 10700–10710, 2019. * [93] Yao Ma, Ziyi Guo, Zhaocun Ren, Jiliang Tang, and Dawei Yin. Streaming Graph Neural Networks. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 719–728, 2020. * [94] Dongkuan Xu, Wei Cheng, Dongsheng Luo, Yameng Gu, Xiao Liu, Jingchao Ni, Bo Zong, Haifeng Chen, and Xiang Zhang. Adaptive Neural Network for Node Classification in Dynamic Networks. In 2019 IEEE International Conference on Data Mining (ICDM), pages 1402–1407. IEEE, 2019. * [95] Dongkuan Xu, Junjie Liang, Wei Cheng, Hua Wei, Haifeng Chen, and Xiang Zhang. Transformer-Style Relational Reasoning with Dynamic Memory Updating for Temporal Network Modeling. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 4546–4554, 2021. * [96] Hogun Park and Jennifer Neville. Exploiting Interaction Links for Node classification with deep graph neural networks. In Proceedings of the 28th International Joint Conference on Artificial Intelligence, pages 3223–3230. AAAI Press, 2019. * [97] Prasha Shrestha, Suraj Maharjan, Dustin Arendt, and Svitlana Volkova. Learning from Dynamic User Interaction Graphs to Forecast Diverse Social Behavior. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, pages 2033–2042. ACM, 2019. 
* [98] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient Estimation of Word Representations in Vector Space. arXiv preprint arXiv:1301.3781, 2013. * [99] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed Representations of Words and Phrases and their Compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119, 2013. * [100] Sam De Winter, Tim Decuypere, Sandra Mitrović, Bart Baesens, and Jochen De Weerdt. Combining Temporal Aspects of Dynamic Networks with Node2Vec for a More Efficient Dynamic Link Prediction. In 2018 IEEE/ACM ASONAM, pages 1234–1241. IEEE, 2018. * [101] Uriel Singer, Ido Guy, and Kira Radinsky. Node Embedding over Temporal Graphs. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19, pages 4605–4612. International Joint Conferences on Artificial Intelligence Organization, July 2019. * [102] Sedigheh Mahdavi, Shima Khoshraftar, and Aijun An. dynnode2vec: Scalable Dynamic Network Embedding. In 2018 IEEE International Conference on Big Data (Big Data), pages 3762–3765. IEEE, 2018. * [103] Robert Bamler and Stephan Mandt. Dynamic Word Embeddings. arXiv preprint arXiv:1702.08359, 2017. * [104] Farzaneh Heidari and Manos Papagelis. EvoNRL: Evolving Network Representation Learning Based on Random Walks. In International Conference on Complex Networks and their Applications, pages 457–469. Springer, 2018. * [105] Hooman Peiro Sajjad, Andrew Docherty, and Yuriy Tyshetskiy. Efficient Representation Learning using Random Walks for Dynamic Graphs. arXiv preprint arXiv:1901.01346, 2019. * [106] Wenchao Yu, Wei Cheng, Charu C Aggarwal, Kai Zhang, Haifeng Chen, and Wei Wang. NetWalk: A Flexible Deep Embedding Approach for Anomaly Detection in Dynamic Networks. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2672–2681, 2018. 
* [107] Supriya Pandhre, Himangi Mittal, Manish Gupta, and Vineeth N Balasubramanian. STWalk: Learning Trajectory Representations in Temporal Graphs. In Proceedings of the ACM India Joint International Conference on Data Science and Management of Data, pages 210–219. ACM, 2018. * [108] Shima Khoshraftar, Sedigheh Mahdavi, Aijun An, Yonggang Hu, and Junfeng Liu. Dynamic Graph Embedding via LSTM History Tracking. In 2019 IEEE International Conference on Data Science and Advanced Analytics (DSAA), pages 119–127. IEEE, 2019. * [109] Zhitao Wang, Chengyao Chen, and Wenjie Li. Attention network for information diffusion prediction. In Companion Proceedings of the The Web Conference 2018, pages 65–66, 2018. * [110] Jia Wang, Vincent W Zheng, Zemin Liu, and Kevin Chen-Chuan Chang. Topological Recurrent Neural Network for Diffusion Prediction. In IEEE ICDM, pages 475–484, 2017. * [111] Cheng Yang, Jian Tang, Maosong Sun, Ganqu Cui, and Zhiyuan Liu. Multi-Scale Information Diffusion Prediction with Reinforced Recurrent Networks. In Proceedings of the 28th International Joint Conference on Artificial Intelligence, pages 4033–4039. AAAI Press, 2019. * [112] Koya Sato, Mizuki Oka, Alain Barrat, and Ciro Cattuto. DyANE: Dynamics-Aware Node Embedding for Temporal Networks. arXiv preprint arXiv:1909.05976, 2019. * [113] Ferenc Béres, Domokos M Kelen, Róbert Pálovics, and András A Benczúr. Node Embeddings in Dynamic Graphs. Applied Network Science, 4(1):64, 2019. * [114] Zhining Liu, Dawei Zhou, and Jingrui He. Towards Explainable Representation of Time-Evolving Graphs via Spatial-Temporal Graph Attention Networks. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, pages 2137–2140. ACM, 2019. * [115] Tianxing Wu, Arijit Khan, Huan Gao, and Cheng Li. Efficiently Embedding Dynamic Knowledge Graphs. arXiv preprint arXiv:1910.06708, 2019. * [116] Tingsong Jiang, Tianyu Liu, Tao Ge, Lei Sha, Sujian Li, Baobao Chang, and Zhifang Sui. 
Encoding Temporal Information for Time-Aware Link Prediction. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2350–2354, 2016. * [117] Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. Translating Embeddings for Modeling Multi-Relational Data. In Advances in Neural Information Processing Systems, pages 2787–2795, 2013. * [118] Omer Levy and Yoav Goldberg. Neural Word Embedding as Implicit Matrix Factorization. In Advances in Neural Information Processing Systems, pages 2177–2185, 2014. * [119] Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. LINE: Large-Scale Information Network Embedding. In Proceedings of the 24th International Conference on World Wide Web, pages 1067–1077. International World Wide Web Conferences Steering Committee, 2015. * [120] Xin Liu, Tsuyoshi Murata, Kyoung-Sook Kim, Chatchawan Kotarasu, and Chenyi Zhuang. A General View for Network Embedding as Matrix Factorization. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, pages 375–383. ACM, 2019. * [121] Mahmudur Rahman, Tanay Kumar Saha, Mohammad Al Hasan, Kevin S Xu, and Chandan K Reddy. DyLink2Vec: Effective Feature Representation for Link Prediction in Dynamic Networks. arXiv preprint arXiv:1804.05755, 2018. * [122] Yuanfu Lu, Xiao Wang, Chuan Shi, Philip S Yu, and Yanfang Ye. Temporal Network Embedding with Micro-and Macro-dynamics. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, pages 469–478. ACM, 2019. * [123] Ying Yin, Jianpeng Zhang, Yulong Pei, Xiaotao Cheng, and Lixin Ji. MHDNE: Network Embedding Based on Multivariate Hawkes Process. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 409–421. Springer, 2019. * [124] Ryan A Rossi. Relational Time Series Forecasting. The Knowledge Engineering Review, 33, 2018. 
* [125] Yulong Pei, Jianpeng Zhang, GH Fletcher, and Mykola Pechenizkiy. Node Classification in Dynamic Social Networks. Proceedings of AALTD, page 54, 2016. * [126] Ruthwik R Junuthula, Kevin S Xu, and Vijay K Devabhaktuni. Evaluating Link Prediction Accuracy in Dynamic Networks with Added and Removed Edges. In 2016 IEEE International Conferences on Big Data and Cloud Computing (BDCloud), Social Computing and Networking (SocialCom), Sustainable Computing and Communications (SustainCom) (BDCloud-SocialCom-SustainCom), pages 377–384. IEEE, 2016. * [127] Sylvain Lamprier. A Variational Topological Neural Model for Cascade-based Diffusion in Networks. arXiv preprint arXiv:1812.10962, 2018. * [128] Yuan Zhang, Tianshu Lyu, and Yan Zhang. COSINE: Community-Preserving Social Network Embedding from Information Diffusion Cascades. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018. * [129] Laurens van der Maaten and Geoffrey Hinton. Visualizing Data using t-SNE. Journal of Machine Learning Research, 9(Nov):2579–2605, 2008. * [130] Palash Goyal, Sujit Rokka Chhetri, Ninareh Mehrabi, Emilio Ferrara, and Arquimedes Canedo. DynamicGEM: A Library for Dynamic Graph Embedding Methods. arXiv preprint arXiv:1811.10734, 2018. * [131] Yifan Feng, Haoxuan You, Zizhao Zhang, Rongrong Ji, and Yue Gao. Hypergraph Neural Networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 3558–3565, 2019. * [132] Zizhao Zhang, Haojie Lin, Yue Gao, and KLISS BNRist. Dynamic Hypergraph Structure Learning. In IJCAI, pages 3162–3169, 2018. * [133] Xiaodong Cheng, Yu Kawano, and Jacquelien MA Scherpen. Graph Structure-Preserving Model Reduction of Linear Network Systems. In 2016 European Control Conference (ECC), pages 1970–1975. IEEE, 2016.
# Parallel Scaling of the Regionally-Implicit Discontinuous Galerkin Method with Quasi-Quadrature-Free Matrix Assembly

Andrew J. Christlieb (Michigan State University, Department of Computational Mathematics, Science and Engineering, 428 S. Shaw Lane, East Lansing, Michigan 48824, USA; [email protected]), Pierson T. Guthrey, corresponding author (Weapons and Complex Integration, Lawrence Livermore National Laboratory, Livermore, CA 94550, USA; Tel.: +337-781-5574; [email protected]), and James A. Rossmanith (Iowa State University, Department of Mathematics, 411 Morrill Road, Ames, Iowa 50011, USA; [email protected]) (Received: date / Accepted: date)

Acknowledgments: This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. Scaling studies were performed at the Institute for Cyber-Enabled Research at Michigan State University and at Livermore Computing at Lawrence Livermore National Laboratory. This work was funded in part by ONR High Order Scalable Solvers grant N0014-19-1-2476, AFOSR Computational Non-Ideal Plasma Physics grant FA9550-17-1-0394, and DoE SciDAC TEAMS grant DE-SC0017955. JAR was supported in part by NSF Grants DMS–1620128 and DMS–2012699.

###### Abstract In this work we investigate the parallel scalability of the numerical method developed in Guthrey and Rossmanith [The regionally implicit discontinuous Galerkin method: Improving the stability of DG-FEM, SIAM J. Numer. Anal. (2019)]. We develop an implementation of the regionally-implicit discontinuous Galerkin (RIDG) method in DoGPack, an open-source C++ software package for discontinuous Galerkin methods. 
Specifically, we develop and test a hybrid OpenMP and MPI parallelized implementation of DoGPack with the goal of exploring the efficiency and scalability of RIDG in comparison to the popular strong-stability-preserving Runge-Kutta discontinuous Galerkin (SSP-RKDG) method. We demonstrate that RIDG methods are able to hide the communication latency associated with distributed-memory parallelism, due to the fact that almost all of the work involved in the method is highly localized to each element, producing a localized prediction for each region. We demonstrate the enhanced efficiency and scalability of the RIDG method, compare it to SSP-RKDG methods, and show extensibility to very high-order schemes. The two-dimensional scaling study is performed on machines at the Institute for Cyber-Enabled Research at Michigan State University, using up to 1440 total cores on Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz CPUs. The three-dimensional scaling study is performed on Livermore Computing clusters at Lawrence Livermore National Laboratory, using up to 28672 total cores on Intel Xeon CLX-8276L CPUs with Omni-Path interconnects. ###### Keywords: discontinuous Galerkin, hyperbolic conservation laws, Courant-Friedrichs-Lewy condition, time-stepping, numerical stability, strong scaling, high performance computing, domain decomposition, quadrature-free ###### MSC: 65M12, 65M60, 65Y20, 35L03 Journal: BIT ## 1 Introduction In high-performance computing, parallel scaling is an important measure of algorithm efficiency, since it provides a quantifiable measure of how much computational benefit is gained by moving to more and more computing resources (i.e., more processors/cores/threads). 
There are two standard approaches to measuring scalability: (1) weak scaling, where the problem size (i.e., the degrees of freedom) increases with the number of cores such that the problem size per processor/core/thread is fixed; and (2) strong scaling, where the number of processors/cores/threads is increased while the problem size is fixed. For problems that are CPU-bound, i.e., problems where memory is not the primary concern, but computing the solution on a single processor takes a long time, the relevant scaling measure is strong scaling. For the numerical solution of partial differential equations (PDEs), the standard strategy for strong scaling is to subdivide a problem of size $P$ over $n$ physical compute cores until a minimal problem size per physical compute core is reached (e.g., see Fischer, Heisey, and Min Fischer ). The ratio of problem size to number of compute cores is known as the granularity, $\eta$, and can be used to predict the ratio of MPI-communication and OpenMP overhead to parallelized work. In the limit of minimal granularity, $\eta\to 1$, parallelization overhead is maximized, reducing efficiency. In this work we are concerned with strong scaling via a hybrid OpenMP openmp + MPI Gabriel2004 approach for a specific class of problems: hyperbolic conservation laws, with a specific type of spatial discretization: discontinuous Galerkin (DG) finite element methods (e.g., see Cockburn and Shu cockshu5 ). The most common technique to time-advance DG spatial discretizations is the strong-stability-preserving Runge-Kutta (SSP-RK) scheme. This approach is explicit in time, and thus very efficient in terms of CPU and memory resources per time-step. However, a drawback is that SSP-RK schemes applied to high-order DG methods suffer from very small allowable time-steps, which then results in an overall reduction in efficiency (i.e., many time-steps are needed to reach a fixed final time). 
In order to overcome these small time-step restrictions, Guthrey and Rossmanith RIDG_paper_2019 developed a new time-stepping approach for DG that was dubbed the regionally-implicit DG (RIDG) scheme. The primary goal of the current work is to develop, implement, and test a parallelized version of the regionally-implicit discontinuous Galerkin (RIDG) method. ### 1.1 Scalability challenges The algorithms discussed in this paper are high-order discontinuous Galerkin (DG) methods with explicit time-stepping for hyperbolic conservation laws. Explicit time-stepping algorithms for DG methods are usually associated with small time-steps, typically with Courant-Friedrichs-Lewy (CFL) numbers much less than $1$ (and indeed inversely proportional to the order of accuracy of the method). The benefit of such schemes is that they have a very low computational cost associated with the (typically) nearest-neighbor stenciling. Thus, there are two challenges associated with attempts to scale such schemes to multiple processors. First, as we divide the computational domain into several pieces, each piece must synchronize with its neighbors across the domain-decomposition pseudo-boundaries, a process also known as synchronization of the halo regions Consortium2017 . This must occur at least once per time-step (typically several times for multi-stage methods), and thus must be performed many times over the course of forming a solution for some fixed final time. Second, for each time-step all processes must synchronize and compute the global maximum wave speed, so that each process uses the same time-step size given by the CFL restriction. This synchronization takes the form of an all-to-all reduction. It can potentially be avoided by using local time-stepping, but we do not discuss that approach in this paper. 
In the face of these challenges, discontinuous Galerkin methods enjoy enhanced scalability properties due to the nearest-neighbor stenciling typical of such methods and the highly element-localized compute intensity. Adaptive mesh refinement has been shown to work very well when applied to DG methods at scale Dumbser2013a . The arbitrary high-order schemes using derivatives (ADER-DG) are predictor-corrector schemes that avoid repeated communication for each time-step by being fully discrete Dumbser2018 . That is, there is a space-time basis underlying the time-step updates as opposed to a method-of-lines update of a purely spatial basis. This approach shows improved MPI scaling efficiency compared to RKDG schemes, resulting in up to $23\%$ smaller $L_{2}$ errors with only $1-2\%$ longer runtimes. One goal of this paper is to perform a similar comparison of RIDG and RKDG methods. For discontinuous Galerkin methods, the granularity $\eta$ is usually discussed in terms of the total number of elements $P$, as opposed to the total number of problem degrees of freedom. This is because domain decomposition is not typically implemented below the level of the elements. That is, individual elements are not typically broken up as any part of a domain-decomposition strategy. Domain decomposition for strong scaling such a problem involves dividing the physical domain into a number of sub-domains equal to the number of MPI tasks or compute nodes, each with a number of physical compute cores. This gives rise to domain-decomposition pseudo-boundaries, in addition to the physical boundaries. In this paper we consider periodic boundary conditions, where physical boundaries become pseudo-boundaries, as they separate problem subdomains. As discussed above, dynamic explicit time-stepping relies on an all-to-all reduction to ensure that each time-step taken uses a global estimate for the maximum wave speed in the computational domain.
That is, each subdomain assigned to an MPI task must compute its “task-local” maximum wave speed, and a “task-global” maximum wave speed must be computed via an all-to-all reduction to ensure that the Courant-Friedrichs-Lewy (CFL) condition is satisfied by all of the sub-domains. Since this procedure merely involves comparing wave speeds to find the maximum wave speed among all processes, the communication latency of this procedure scales with the total number of processes as $\log_{2}P$, not with the granularity $\eta$. Every subdomain must perform halo-region communications with all subdomains with which it shares a pseudo-boundary. This implies that periodic boundary conditions are more difficult to scale than Dirichlet or outflow conditions, since they inherently involve extra communications, specifically the wrap-around communications (e.g., the east side of the mesh communicates with the west side of the mesh). As the granularity decreases, the time spent copying data to MPI buffers relative to the work in the rest of the problem increases, potentially degrading efficiency. However, this is a memory operation and thus can be relatively quick, even at low granularity. Explicit time-stepping methods usually exhibit restrictive CFL conditions (even more so with high-order DG spatial discretizations), which increase the total number of time-steps, which in turn increases the total amount of communication per simulation time. Thus, the scalability of an explicit time-stepping scheme depends on the scheme’s ability to overcome communication overheads and hide latency as $\eta\to 1$. ### 1.2 Scope of this work The purpose of the current work is to develop, implement, and study an OpenMP + MPI parallelization of the regionally-implicit discontinuous Galerkin scheme (RIDG) Guthrey2017 ; RIDG_paper_2019 .
In particular, we are concerned with comparing it to parallel implementations of the most commonly used variant of the discontinuous Galerkin method: the strong-stability-preserving Runge-Kutta DG (SSP-RKDG) scheme cockshu5 ; article:GoShu98 ; gottliebShuTadmor01 . In section 2 we briefly review the regionally-implicit method, and in particular, the two key parts of the time-stepping strategy: (1) regionally-implicit prediction and (2) explicit correction. In section 3 we develop a quasi-quadrature-free approach for the Jacobian matrix assembly that is required in the regionally-implicit prediction step. Next we briefly define the terminology needed to quantify the accuracy and efficiency of our parallel implementations in section 4. A brief discussion and explanation of the proposed parallel implementation is provided in section 5. In section 6.1, section 6.2, and section 6.3 we report our findings for examples in one, two, and three spatial dimensions, respectively. The two-dimensional scaling study presented in section 6.2 is performed at the Institute for Cyber-Enabled Research at Michigan State University, using up to 1440 total cores on Intel(R) Xeon(R) Gold 6148 CPUs @ 2.40GHz. The three-dimensional scaling study presented in section 6.3 is performed on Livermore Computing clusters at Lawrence Livermore National Laboratory, using up to 28672 total cores on Intel Xeon CLX-8276L CPUs with Omni-Path interconnects. In these sections we demonstrate the benefits of using RIDG over SSP-RKDG, especially as the number of processors increases and as the order of the method increases. Finally, we summarize our findings in section 7.
## 2 Regionally-implicit discontinuous Galerkin (RIDG) method Figure 1: Shown is the 2D Cartesian region ${\mathcal{R}}_{i}$ over which the RIDG prediction step is carried out. In the RIDG prediction step, all of the states, excepting only the one belonging to the middle element, are only temporary variables and will be discarded once the predicted solution in element $ij$ has been computed – to make note of this we place hats over the temporary variables. The regionally-implicit discontinuous Galerkin (RIDG) method was developed in recent work by Guthrey and Rossmanith RIDG_paper_2019 . The goal of that work was to improve the linear stability of the ADER-DG scheme article:Dumbser2006 ; article:GasDumHinMun2011 ; article:Zanotti2015 . In particular, the origin of the RIDG method is tied to the method of Gassner et al. article:GasDumHinMun2011 , where it was shown that Lax-Wendroff discontinuous Galerkin Qiu2005a can be formulated as a predictor-corrector method. The predictor is a local version of a spacetime DG method article:KlaVegVen2006 ; article:Sudirham2006 (i.e., the predictor is something like a block-Jacobi update for a fully implicit spacetime DG method), and the corrector is an explicit method that uses the spacetime-reconstructed solution from the predictor step.
The drawback of the locally-implicit approach is that the resulting ADER-DG scheme suffers from small maximum allowable time-steps that decrease with increasing order of accuracy. The RIDG method instead uses a regionally-implicit prediction step, which has the benefit of resulting in a DG method with maximum allowable time-steps that are much closer to the optimal limit; in particular, the maximum allowed time-step does not degrade as the order of accuracy is increased. The drawback of RIDG is that it is more computationally expensive per time-step than standard ADER-DG; however, this penalty is more than offset by the significantly increased stable time-step, especially for very high-order schemes. In this section we briefly review the key concepts of the RIDG approach. For more details we refer the reader to RIDG_paper_2019 . ### 2.1 General setup Consider hyperbolic conservation laws of the form ${\underline{q}}_{,t}+{\underline{\nabla}}\cdot{\underline{\underline{F}}}\left({\underline{q}}\right)={\underline{0}},$ (1) where ${\underline{q}}\left(t,{\underline{x}}\right):\mathbb{R}^{+}\times\mathbb{R}^{M_{\text{dim}}}\mapsto\mathbb{R}^{M_{\text{eqn}}}$ is the vector of conserved variables, ${\underline{\underline{F}}}\left({\underline{q}}\right):\mathbb{R}^{M_{\text{eqn}}}\mapsto\mathbb{R}^{{M_{\text{eqn}}}\times{M_{\text{dim}}}}$ is the flux function, ${M_{\text{dim}}}$ is the number of spatial dimensions, and ${M_{\text{eqn}}}$ is the number of conserved variables. We define $\Omega\subset\mathbb{R}^{M_{\text{dim}}}$ to be a polygonal domain with boundary $\partial\Omega$, and discretize $\Omega$ using a finite set of non-overlapping elements, ${\mathcal{T}}_{i}$, such that $\cup_{i=1}^{M_{\text{elem}}}{\mathcal{T}}_{i}=\Omega$, where ${M_{\text{elem}}}$ is the total number of elements.
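To make the flux function in eq. 1 concrete, the following sketch implements two example fluxes with the shapes used above: ${\underline{\underline{F}}}$ maps a state of length ${M_{\text{eqn}}}$ to an ${M_{\text{eqn}}}\times{M_{\text{dim}}}$ array. Constant-coefficient advection in two dimensions and 1D Burgers are chosen purely as illustrations; the advection velocity is an assumed constant, not a value from the paper.

```python
import numpy as np

# Two concrete instances of the flux F(q) from eq. (1), chosen purely as
# illustrations: constant-coefficient advection in M_dim = 2 (the velocity
# below is an assumed constant) and 1D Burgers. Following the shapes above,
# F maps a state of length M_eqn to an (M_eqn x M_dim) array.

def advection_flux(q, velocity=(1.0, 0.5)):
    q = np.atleast_1d(np.asarray(q, dtype=float))
    return np.outer(q, np.asarray(velocity))   # shape (M_eqn, M_dim)

def burgers_flux(q):
    q = np.atleast_1d(np.asarray(q, dtype=float))
    return (0.5 * q**2).reshape(-1, 1)         # shape (M_eqn, 1)

print(advection_flux(2.0).shape, burgers_flux(3.0).shape)  # (1, 2) (1, 1)
```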
Let ${\mathbb{Q}}\left({M_{\text{deg}}},{M_{\text{dim}}}\right)$ denote the set of polynomials from $\mathbb{R}^{M_{\text{dim}}}$ to $\mathbb{R}$ with maximal polynomial degree ${M_{\text{deg}}}$. On the mesh of ${M_{\text{elem}}}$ elements we define the broken finite element space: ${\mathcal{W}}^{h}:=\left\\{{\underline{w}}^{h}\in\left[L^{\infty}(\Omega)\right]^{{M_{\text{eqn}}}}:\,{\underline{w}}^{h}\bigl{|}_{{\mathcal{T}}_{i}}\in\left[{\mathbb{Q}}\left({M_{\text{deg}}},{M_{\text{dim}}}\right)\right]^{{M_{\text{eqn}}}}\,\,\forall{\mathcal{T}}_{i}\right\\},$ where $h$ is the grid spacing. Let $\Phi_{k}\left({\underline{x}}\right)$ for $k=1,\ldots,M_{\text{basis}}$ be a basis that spans ${\mathbb{Q}}\left({M_{\text{deg}}},{M_{\text{dim}}}\right)$ over ${\mathcal{T}}_{i}$ and is orthonormal: $\frac{1}{|{\mathcal{T}}_{i}|}\int_{{\mathcal{T}}_{i}}\Phi_{k}\left({\underline{x}}\right)\Phi_{\ell}\left({\underline{x}}\right)\,d{\underline{x}}=\delta_{k\ell},$ where $\delta_{k\ell}$ is the Kronecker delta and $|{\mathcal{T}}_{i}|$ is the volume of element ${\mathcal{T}}_{i}$. In order to setup the RIDG method we assume that on each element the solution is of the following form: ${\underline{q}}^{h}\left(t^{n},{\underline{x}}\right)\Bigl{|}_{{\mathcal{T}}_{i}}={\underline{\Phi}}\left({\underline{x}}\right)^{T}{\underline{\underline{Q}}}_{i}=\sum_{\ell=1}^{M_{\text{basis}}}{\underline{Q}}_{i}^{\ell}\left(t^{n}\right)\,\Phi_{\ell}\left({\underline{x}}\right),$ (2) where ${\underline{Q}}_{i}^{\ell}(t^{n})$ are the unknown degrees of freedom at time level $t^{n}$. Using this ansatz, multiplying eq. 
1 by $\Phi_{k}$, integrating over the element ${\mathcal{T}}_{i}$, using integration-by-parts in space, and integrating over the time slab $\left[t^{n},t^{n+1}\right]$, results in the following equation: $\begin{split}{\underline{Q}}_{i}^{k}\left(t^{n+1}\right)&={\underline{Q}}_{i}^{k}\left(t^{n}\right)+\frac{1}{|{\mathcal{T}}_{i}|}\int_{t^{n}}^{t^{n+1}}\int_{{\mathcal{T}}_{i}}{\underline{\nabla}}\Phi_{k}\cdot{\underline{\underline{F}}}\left(\,{\underline{q}}^{h}\right)\,d{\underline{x}}\,dt\\\ &-\frac{1}{|{\mathcal{T}}_{i}|}\int_{t^{n}}^{t^{n+1}}\oint_{\partial{\mathcal{T}}_{i}}\Phi_{k}\,{\underline{{\mathcal{F}}}}\left({\underline{q}}^{h}_{+},{\underline{q}}^{h}_{-};{\underline{n}}\right)\,d{\underline{s}}\,dt,\end{split}$ (3) where ${\underline{n}}$ is an outward-pointing normal vector to $\partial{\mathcal{T}}_{i}$, ${\underline{q}}^{h}_{+}$ and ${\underline{q}}^{h}_{-}$ are the states on either side of the boundary $\partial{\mathcal{T}}_{i}$, and ${\underline{{\mathcal{F}}}}$ is some appropriate numerical flux. Equation eq. 3 has the look and feel of a fully-discrete numerical method; however, the above volume and surface integrals cannot be evaluated directly since they require knowledge of the solution over the entire time-slab: $\left[t^{n},t^{n+1}\right]$. The idea of schemes such as ADER-DG article:Dumbser2006 ; article:GasDumHinMun2011 ; article:Zanotti2015 , its close cousin Lax-Wendroff DG Qiu2005a , and indeed RIDG RIDG_paper_2019 , is to form a prediction for the solution ${\underline{q}}^{h}$ over the time-slab: $\left[t^{n},t^{n+1}\right]$, and then to insert this prediction into equation eq. 3 to obtain a fully discrete update formula. The main difference between ADER-DG, Lax-Wendroff DG, and RIDG is only in how the prediction step is formed. We briefly review the details for RIDG below.
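The orthonormality condition on the basis stated in the setup above can be checked numerically. The sketch below uses scaled Legendre polynomials $\Phi_n=\sqrt{2n+1}\,P_n$ on the reference element $[-1,1]$ (a standard choice, assumed here rather than taken from the paper) together with Gauss-Legendre quadrature.

```python
import numpy as np
from numpy.polynomial import legendre as L

# Numerical check of the orthonormality condition
#   (1/|T|) int_T Phi_k Phi_l dx = delta_{kl}
# on the reference element T = [-1, 1], using the scaled Legendre basis
# Phi_n = sqrt(2n + 1) * P_n (an assumed, standard choice).

def phi(n, x):
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0
    return np.sqrt(2 * n + 1) * L.legval(x, coeffs)

def gram_matrix(m_basis, n_quad=12):
    x, w = L.leggauss(n_quad)  # Gauss-Legendre nodes/weights on [-1, 1]
    G = np.empty((m_basis, m_basis))
    for k in range(m_basis):
        for l in range(m_basis):
            G[k, l] = 0.5 * np.sum(w * phi(k, x) * phi(l, x))  # 1/|T| = 1/2
    return G

print(np.round(gram_matrix(5), 10))  # the 5x5 identity, to round-off
```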
### 2.2 RIDG prediction step In order to describe the prediction step, we define spacetime elements and regions: ${\mathcal{S}}_{i}:=\left[t^{n},\,t^{n+1}\right]\times{\mathcal{T}}_{i}\qquad\text{and}\qquad{\mathcal{R}}_{i}:=\bigcup_{j:{\mathcal{T}}_{i}\cap{\mathcal{T}}_{j}\neq\emptyset}{\mathcal{S}}_{j},$ (4) respectively. Note that definition eq. 4 includes all of the elements ${\mathcal{T}}_{j}$ that share either a face or a vertex with element ${\mathcal{T}}_{i}$ – we refer to all of the elements in ${\mathcal{R}}_{i}$ as vertex-neighbors of element ${\mathcal{T}}_{i}$; a 2D version is depicted in fig. 1 . Also note that on a Cartesian mesh in ${M_{\text{dim}}}$ dimensions, region ${\mathcal{R}}_{i}$ contains exactly $3^{{M_{\text{dim}}}}$ spacetime elements. On each spacetime element we write the predicted solution as ${\underline{q}}\left(t,{\underline{x}}\right)\Bigl{|}_{{\mathcal{S}}_{i}}\approx w_{i}:={\underline{\Psi}}^{T}{\underline{\underline{W}}}_{i},$ (5) where ${\underline{\underline{W}}}_{i}\in\mathbb{R}^{{M_{\text{P}}}\times{M_{\text{eqn}}}}$, ${\underline{\Psi}}\in\mathbb{R}^{{M_{\text{P}}}}$, ${M_{\text{P}}}:=\left({M_{\text{deg}}}+1\right)^{{M_{\text{dim}}}+1},$ and ${\underline{\Psi}}$ is an orthonormal basis on the spacetime element: $\frac{1}{\bigl{|}{\mathcal{S}}_{i}\bigr{|}}\int_{S_{i}}{\underline{\Psi}}\,{\underline{\Psi}}^{T}\,d\tau\,d\xi={\underline{\underline{\mathbb{I}}}}\in\mathbb{R}^{{M_{\text{P}}}\times{M_{\text{P}}}}.$ In order to arrive at an algebraic system of equations to solve for the unknown spacetime coefficients in region ${\mathcal{R}}_{i}$, we multiply eq. 1 by test functions ${\underline{\Psi}}$ and then integrate over each ${\mathcal{S}}_{j}\in{\mathcal{R}}_{i}$. We apply integration-by-parts in both space and time. The integration-by-parts in time connects the current time- slab $[t^{n},t^{n+1}]$ to the solution in the previous time-slab (i.e., this enforces causality). 
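On a Cartesian mesh, the region ${\mathcal{R}}_{i}$ of eq. 4 can be enumerated by per-dimension offsets, which makes the $3^{{M_{\text{dim}}}}$ count explicit; a minimal sketch:

```python
from itertools import product

# Enumerating the region R_i of eq. (4) on a Cartesian mesh: the
# vertex-neighbors of an element are all per-dimension offsets in
# {-1, 0, +1}, giving exactly 3**M_dim spacetime elements (the element
# itself is the all-zero offset).

def region_offsets(m_dim):
    return list(product((-1, 0, 1), repeat=m_dim))

for d in (1, 2, 3):
    print(d, len(region_offsets(d)))  # 3, 9, 27 elements, i.e. 3**d
```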
In space we treat element boundaries differently depending on whether or not those boundaries are (1) strictly internal to region ${\mathcal{R}}_{i}$ or (2) on the boundary of region ${\mathcal{R}}_{i}$. For strictly internal boundaries we evaluate the resulting surface integrals through standard upwind fluxes (i.e., Rusanov fluxes) that utilize the solution on both sides of the interface. However, on the boundary of ${\mathcal{R}}_{i}$ we evaluate the resulting surface integrals only using the trace of the solution that is internal to ${\mathcal{R}}_{i}$. The effect of these choices is to insulate region ${\mathcal{R}}_{i}$ from all elements exterior to ${\mathcal{R}}_{i}$. Solving the algebraic equations in region ${\mathcal{R}}_{i}$ resulting from the above-described integration-by-parts gives us the regionally-implicit prediction. This setup is depicted in the 2D setting in fig. 1 . Algebraically, the system that must be solved in each region has the following form for each $j:{\mathcal{T}}_{j}\in{\mathcal{R}}_{i}$ and for each $k=1,2,\ldots,{M_{\text{P}}}$: $\displaystyle\begin{split}{\underline{R}}_{jk}:=&\underbrace{\left[\int_{{\mathcal{T}}_{j}}\left(\Psi_{k}{\underline{\Psi}}^{T}\right)\Bigl{|}_{t^{n+1}}\,d{\underline{x}}-\int_{t^{n}}^{t^{n+1}}\int_{{\mathcal{T}}_{j}}\Psi_{k,t}\,{\underline{\Psi}}^{T}\,d{\underline{x}}\,dt\right]\,{\underline{\underline{W}}}_{j}}_{\text{(time term)}}\\\ -&\underbrace{\int_{t^{n}}^{t^{n+1}}\int_{{\mathcal{T}}_{j}}{\underline{\nabla}}\Psi_{k}\cdot{\underline{\underline{F}}}\left({\underline{\Psi}}^{T}\,{\underline{\underline{W}}}_{j}\right)\,d{\underline{x}}\,dt}_{\text{(internal flux term)}}\\\ +&\underbrace{\int_{t^{n}}^{t^{n+1}}\oint_{\partial{\mathcal{T}}^{\star}_{j}}\Psi_{k}\,{\underline{{\mathcal{F}}}}\left({\underline{w}}^{h}_{+},{\underline{w}}^{h}_{-};{\underline{n}}\right)\,d{\underline{s}}\,dt}_{\text{(surface flux 
term)}}-\underbrace{\left[\int_{{\mathcal{T}}_{j}}\left(\Psi_{k}{\underline{\Phi}}^{T}\right)\Bigl{|}_{t^{n}}\,d{\underline{x}}\right]\,{\underline{\underline{Q}}}^{n}_{j}}_{\text{(causal source)}}={\underline{0}},\end{split}$ (6) where $\partial{\mathcal{T}}^{\star}_{j}$ is the part of the boundary of ${\mathcal{T}}_{j}$ that is interior to ${\mathcal{R}}_{i}$ and ${\underline{n}}$ is an outward-pointing normal to $\partial{\mathcal{T}}^{\star}_{j}$. The states ${\underline{w}}^{h}_{\pm}$ are the solution on either side of the boundary $\partial{\mathcal{T}}^{\star}_{j}$. In this work, the numerical flux, ${\underline{{\mathcal{F}}}}$, is taken to be the Rusanov (sometimes called the local Lax-Friedrichs) flux article:Ru61 . On a Cartesian mesh in ${M_{\text{dim}}}$ dimensions, equation eq. 6 represents a nonlinear algebraic system of size ${M_{\text{eqn}}}\cdot{M_{\text{P}}}\cdot 3^{M_{\text{dim}}}$. In fact, the only portion of the solution to eq. 6 that we actually need to retain is the solution on the central element: ${\mathcal{T}}_{i}$; the solution on the remaining elements in ${\mathcal{R}}_{i}$ will be discarded (again, see fig. 1 ). This prediction step is by far the most expensive portion of the RIDG scheme. Furthermore, the solution to eq. 6 by itself is of little use: it is not even consistent with the original PDE, because region ${\mathcal{R}}_{i}$ has been insulated from all other regions in the computational domain. However, using this solution as a prediction inside an appropriate correction step produces a scheme that is both high-order accurate and stable up to CFL numbers that significantly exceed Lax-Wendroff DG and SSP-RKDG RIDG_paper_2019 . ### 2.3 Correction step for RIDG For the correction step we assume a solution of the form eq. 2 and use a version of equation eq. 3 where the solution inside each of the spacetime integrals is replaced by the predicted solution eq.
5 : $\begin{split}{\underline{Q}}_{i}^{k}\left(t^{n+1}\right)&={\underline{Q}}_{i}^{k}\left(t^{n}\right)+\frac{1}{|{\mathcal{T}}_{i}|}\int_{t^{n}}^{t^{n+1}}\int_{{\mathcal{T}}_{i}}{\underline{\nabla}}\Phi_{k}\cdot{\underline{\underline{F}}}\left({\underline{\Psi}}^{T}\,{\underline{\underline{W}}}_{i}\right)\,d{\underline{x}}\,dt\\\ &-\frac{1}{|{\mathcal{T}}_{i}|}\int_{t^{n}}^{t^{n+1}}\oint_{\partial{\mathcal{T}}_{i}}\Phi_{k}\,{\underline{{\mathcal{F}}}}\left({\underline{w}}^{h}_{+},{\underline{w}}^{h}_{-};{\underline{n}}\right)\,d{\underline{s}}\,dt.\end{split}$ In the above equation ${\underline{n}}$ is an outward-pointing normal vector to $\partial{\mathcal{T}}_{i}$, ${\underline{w}}^{h}_{+}$ and ${\underline{w}}^{h}_{-}$ are the states on either side of the boundary $\partial{\mathcal{T}}_{i}$, and ${\underline{{\mathcal{F}}}}$ is the numerical flux. In this work we always make use of the Rusanov (sometimes called the local Lax-Friedrichs) flux article:Ru61 . This portion of the regionally-implicit DG update is fully explicit and computationally inexpensive relative to the prediction step. ## 3 Efficient implementation of RIDG via quasi-quadrature-free Jacobian matrix assembly As described in section 2.2 , the prediction step in the regionally-implicit DG method is the most computationally expensive, since it requires the solution of a nonlinear system of algebraic equations over each region ${\mathcal{R}}_{i}$. Due to the computational complexity of space-time DG methods, efficient matrix assembly is required if high-order RIDG methods are to be viable. Therefore, in order to make this step as efficient as possible, we develop in this section a quasi-quadrature-free Jacobian matrix assembly. Quadrature-free schemes have provided efficient alternatives to quadrature in DG schemes for many years Atkins1998 . Depending on the basis choice, they can benefit from reduced computational complexity for high-dimensional problems without sacrificing accuracy.
That is, quadrature-free schemes can be designed to inherit the same assumptions about the underlying integrand as do discrete quadrature schemes. Thus, quadrature-free integrals are efficient replacements for standard quadrature schemes, especially when orthogonal bases are used. Matrix assembly for nodal elements admits additional optimizations Engsig-Karup2016 . With some work, quadrature-free schemes can also be accurately extended to unstructured geometries chan2016 . In the current work we extend this previous work to high-order space-time Jacobian matrix assembly, as required by the Newton method for solving the nonlinear systems inherent to the RIDG prediction step Guthrey2017 ; RIDG_paper_2019 . The prediction step algebraic system given by eq. 6 is typically solved via Newton's method, which can be written as ${\underline{W}}^{(m+1)}={\underline{W}}^{(m)}-\left[{{\underline{\underline{J}}}}\left({\underline{W}}^{(m)}\right)\right]^{-1}{\underline{R}}\left({\underline{W}}^{(m)}\right),$ where ${\underline{R}}$ is shorthand for the residual defined in eq. 6 written as a column vector, ${\underline{W}}$ is shorthand for all of the unknown coefficients in region ${\mathcal{R}}_{i}$ written as a column vector, $m$ is the Newton iteration counter, and ${\underline{\underline{J}}}$ is the Jacobian of ${\underline{R}}$ with respect to ${\underline{W}}$.
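The Newton update above can be illustrated on a small stand-in problem. The residual below is a toy $2\times 2$ nonlinear system, not the RIDG residual of eq. 6, and its Jacobian is assembled analytically; solving a linear system each iteration avoids forming $J^{-1}$ explicitly.

```python
import numpy as np

# Minimal dense Newton iteration of the form used for the prediction step:
#   W^(m+1) = W^(m) - J(W^(m))^{-1} R(W^(m)).
# The residual here is a toy 2x2 nonlinear system (NOT the RIDG residual
# of eq. (6)), chosen only to illustrate the update.

def residual(w):
    return np.array([w[0]**2 + w[1] - 3.0,
                     w[0] + w[1]**2 - 5.0])

def jacobian(w):
    return np.array([[2.0 * w[0], 1.0],
                     [1.0, 2.0 * w[1]]])

def newton(w0, tol=1e-12, max_iter=50):
    w = np.asarray(w0, dtype=float)
    for _ in range(max_iter):
        r = residual(w)
        if np.linalg.norm(r) < tol:
            break
        w = w - np.linalg.solve(jacobian(w), r)  # avoid forming J^{-1}
    return w

w = newton([1.0, 1.0])
print(w, residual(w))
```

From the initial guess $(1,1)$ the iteration converges quadratically to the root $(1,2)$ of this toy system.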
For ease of discussion, let us focus our description of the Jacobian-free implementation on only the internal flux term in the residual and let us consider only a scalar case (${M_{\text{eqn}}}=1$) – these restrictions can easily be removed: $\begin{split}{R}_{jk}&=(\text{time term})-\int_{t^{n}}^{t^{n+1}}\int_{{\mathcal{T}}_{j}}{\underline{\nabla}}\Psi_{k}\cdot{\underline{F}}\left({\underline{\Psi}}^{T}\,{\underline{W}}_{j}\right)\,d{\underline{x}}\,dt\\\ &+(\text{surface flux term})-(\text{causal source}),\end{split}$ (7) for each $j:{\mathcal{T}}_{j}\in{\mathcal{R}}_{i}$ and for each $k=1,2,\ldots,{M_{\text{P}}}$, where ${\underline{W}}_{j}\in\mathbb{R}^{{M_{\text{P}}}}$ are the unknown coefficients in spacetime element ${\mathcal{S}}_{j}$. The Jacobian of this residual with respect to the degrees of freedom ${\underline{W}}$ is $\begin{split}{J}_{jk\ell}&=\frac{\partial}{\partial{W_{\ell}}}(\text{time term})-\int_{t^{n}}^{t^{n+1}}\int_{{\mathcal{T}}_{j}}{\underline{\nabla}}\Psi_{k}\cdot{\underline{F}}^{\prime}\left({\underline{\Psi}}^{T}\,{\underline{W}}_{j}\right)\,{\Psi}_{\ell}\,d{\underline{x}}\,dt\\\ &+\frac{\partial}{\partial{W_{\ell}}}(\text{surface flux term}),\end{split}$ (8) where ${\underline{F}}^{\prime}(q)\cdot{\underline{n}}$ is the flux Jacobian of the hyperbolic conservation law in direction ${\underline{n}}$. In order to assemble the above Jacobian matrix for the Newton iteration, several methods are available. * • Quadrature: We could perform the integrals in eq. 8 via quadrature for every $k,\ell\leq\theta$, where $\theta$ is the number of terms in our space-time Legendre basis. This method is able to take advantage of the sparsity pattern of our block-stencil. The computational complexity of such a method is $\text{timing}\approx\mathcal{O}(\theta^{2}({M_{\text{deg}}}+1)^{{M_{\text{dim}}}+1})\approx\mathcal{O}(({M_{\text{deg}}}+1)^{3{M_{\text{dim}}}+3}),$ (9) which will become incredibly expensive for high-order methods in three dimensions. 
* • Perturbation: One alternative to direct quadrature is to approximate the Jacobian ${J}_{jk\ell}$ via either finite differences or Gateaux derivatives of the residual eq. 7 ; this is also known as a perturbation method. This method is very simple to implement, as you only need to define the residual for your Newton iteration. However, since the residual needs to be recomputed a number of times equal to the number of degrees of freedom, the computational complexity is roughly the same as eq. 9 . * • Quadrature-free: If the flux function is sufficiently simple (i.e., a polynomial function of the solution), then an alternative to the quadrature and perturbation approaches is to perform exact integrations of the terms needed in the Jacobian; this was the method employed in Guthrey and Rossmanith RIDG_paper_2019 . Unfortunately, this method does not generalize well, even to relatively simple rational flux functions such as those seen in the compressible Euler equations. In this work we consider an alternative to all of these approaches; it is similar to the quadrature-free approach, but applies to general fluxes (i.e., non-polynomial). The idea is this: we project components of the flux Jacobian, ${\underline{F}}^{\prime}(w)$, onto the polynomial basis ${\underline{\Psi}}$: ${\underline{F}}^{\prime}_{\,p}=\left\\{\int_{t^{n}}^{t^{n+1}}\int_{{\mathcal{T}}_{j}}\Psi_{p}\,{\underline{F}}^{\prime}(w)\,d{\underline{x}}\,dt\right\\}\biggl{/}\left\\{\int_{t^{n}}^{t^{n+1}}\int_{{\mathcal{T}}_{j}}\left(\Psi_{p}\right)^{2}\,d{\underline{x}}\,dt\right\\}.$ (10) Using the expansion of ${\underline{F}}^{\prime}(w)$ in the expression for the residual eq. 
8 , we obtain $\begin{split}{J}_{jk\ell}&=\frac{\partial}{\partial{W_{\ell}}}(\text{time term})-\sum_{p=1}^{\infty}\left(\int_{t^{n}}^{t^{n+1}}\int_{{\mathcal{T}}_{j}}{\underline{\nabla}}\Psi_{k}\,\Psi_{p}\,{\Psi}_{\ell}\,d{\underline{x}}\,dt\right)\cdot{\underline{F}}^{\prime}_{\,p}\\\ &+\frac{\partial}{\partial{W_{\ell}}}(\text{surface flux term}).\end{split}$ (11) Due to the orthogonality of our basis, there exists some $L$ such that for all $p>L$ and all needed $j$, $k$, and $\ell$, the integrals in the above expression all vanish (see Gupta and Narasimhan Gupta2007 ). That is, only a limited number of the integrals in eq. 11 are actually nonzero. Furthermore, these integral expressions can be precomputed exactly and stored, and quadrature is only needed for the projection eq. 10 . Thus, for each $j,k,\ell$ we may form a list of the indices $p$ for which the associated integral is nonzero, with $\alpha_{jk\ell}$ denoting the number of such indices, and a list ${\underline{\beta}}_{jk\ell p}$ of the precomputed values of the nonzero integrals. Finally, we can then simplify eq. 11 to the following: $\begin{split}{J}_{jk\ell}&=\frac{\partial}{\partial{W_{\ell}}}(\text{time term})-\sum_{p=1}^{\alpha_{jk\ell}}\left({\underline{\beta}}_{jk\ell p}\cdot{\underline{F}}^{\prime}_{\,p}\right)+\frac{\partial}{\partial{W_{\ell}}}(\text{surface flux term}).\end{split}$ Figure 2: Newton iteration Jacobian matrix assembly time for one 3D+1 space-time prediction for a nonlinear problem, using (1, blue) a theoretical estimate of the cost of a perturbation method obtained by multiplying the timings of the residual computation by the number of degrees of freedom in the Newton iteration, (2, red) traditional quadrature routines to compute the integrals in eq.
8, and (3, tan) the quasi-quadrature-free routine discussed in this section. Runtimes are $\log_{10}$ of averages of 10 trial runs in microseconds. We see that the quasi-quadrature-free method is the most efficient, and scales better as we increase ${M_{\text{deg}}}$. For ${M_{\text{deg}}}+1=6$, the quasi-quadrature-free method is almost two orders of magnitude faster than the traditional quadrature scheme. This quasi-quadrature-free method maintains the same order of accuracy as the quadrature or perturbation strategies described above, but in practice is orders of magnitude faster, because the space-time quadrature is performed once per degree of freedom per element as opposed to once per square of the degrees of freedom per element. We compare timings of these three methods with ${M_{\text{dim}}}=3$ in fig. 2 . At this time we do not have an a priori estimate for the computational complexity of the quasi-quadrature-free method, but we are able to compute a posteriori estimates for each method: $\displaystyle\text{perturbation timings}\approx$ $\displaystyle\;\mathcal{O}(({M_{\text{deg}}}+1)^{11.9})$ $\displaystyle\text{quadrature timings}\approx$ $\displaystyle\;\mathcal{O}(({M_{\text{deg}}}+1)^{11.6})$ $\displaystyle\text{quasi-quadrature-free timings}\approx$ $\displaystyle\;\mathcal{O}(({M_{\text{deg}}}+1)^{9.3})$ We see that the perturbation and traditional quadrature routines exhibit the expected computational complexity described in eq. 9, but that the quasi-quadrature-free method exhibits a significantly lower computational complexity. For brevity and clarity we have omitted the discussion of the temporal derivative terms and surface flux terms. However, we note that the time terms of the matrix assembly of $J_{jk\ell}$ can be precomputed and stored, and no projection is needed during the matrix assembly.
Furthermore, the surface terms in general involve a numerical flux, for which a numerical flux Jacobian must be generated or approximated before it is projected. These types of terms affect several blocks of the Jacobian matrix, exactly in accordance with the block-stencil derived from our numerical flux. To review, we have efficiently extended quadrature-free schemes to the space-time prediction procedure needed for our regionally-implicit DG method. This implementation offers substantial speedups for a very computationally expensive method in a way that is extensible to arbitrary problems and relatively simple to implement. We will use this strategy in the remainder of this paper. ## 4 Basic terminology The goal of this work is to extend the convergence studies performed in Guthrey and Rossmanith RIDG_paper_2019 to the case of strong scaling in an HPC setting. Before reporting results in subsequent sections, we describe here some of the key terms needed to analyze the efficiency and accuracy of parallelized DG schemes. We also note that here and throughout the remainder of the paper we consider only the scalar case: ${M_{\text{eqn}}}=1$. Our findings generalize to systems of equations, but for brevity and clarity we restrict the remainder of this work to ${M_{\text{eqn}}}=1$. We define the following key terms. • mesh: The description of the number of elements. All methods considered in this paper use a uniform Cartesian mesh. • dof: The total number of degrees of freedom. We compute this by multiplying the basis size ($\theta$) by the number of elements (mesh): $\text{\bf dof}(\theta,\text{\bf mesh})=\theta\times\text{\bf mesh}.$ • efom: High-order DG methods can be compared to first-order finite volume methods by (among other ways) comparing the equivalent number of total degrees of freedom.
Thus, efom (equivalent first-order mesh) indicates what size Cartesian mesh with one degree of freedom per cell would have the equivalent number of degrees of freedom for a given mesh and method order. The efom for a method with a basis of size $\theta$ for a ${M_{\text{dim}}}$-dimensional problem is computed using $\text{\bf efom}(\theta,\text{\bf mesh})=\left[\left(\text{\bf dof}\right)^{\frac{1}{{M_{\text{dim}}}}}\right]^{{M_{\text{dim}}}}=\left[\left(\theta\times\text{\bf mesh}\right)^{\frac{1}{{M_{\text{dim}}}}}\right]^{{M_{\text{dim}}}}.$ (12) For example, a $10\times 10$ mesh in 2D with a basis of size $16$ has an efom of $40\times 40$ since it has the equivalent number of degrees of freedom as a $40\times 40$ first-order mesh. • error: The approximate relative error in the $L^{2}$ norm of our numerical solution. This error is obtained by considering an exact projection $q_{\text{exact}}$ onto an infinite basis, ${\underline{\Phi}}$, on each element: ${\underline{X}}_{i}:=\underset{{\underline{\Phi}}^{\infty}}{\operatorname{proj}}\,q_{\text{exact}}$. Comparing this exact expansion to the numerical solution gives the following element-wise error definition: $e({\underline{x}})\Bigl{|}_{{\mathcal{T}}_{i}}:=\left|\underbrace{\sum\limits_{\ell=1}^{\infty}\Phi_{\ell}({\underline{x}})\,X^{\ell}_{i}}_{\text{exact}}-\underbrace{\sum\limits_{\ell=1}^{M_{\text{basis}}}\Phi_{\ell}({\underline{x}})\,Q^{\ell}_{i}}_{\text{numerical}}\right|.$ Using the orthonormality of the basis functions gives the following $L^{2}$ error: ${\left\|e\right\|}_{L^{2}}^{2}=\sum_{i}\left[\underbrace{\sum\limits_{\ell=1}^{M_{\text{basis}}}\left(X_{i}^{\ell}-Q^{\ell}_{i}\right)^{2}}_{\text{error in coeffs}}+\underbrace{\sum\limits_{\ell=M_{\text{basis}}+1}^{M_{\text{basis}}^{+}}\left(X_{i}^{\ell}\right)^{2}}_{\text{dominant trunc. error}}+\underbrace{\sum\limits_{\ell=M_{\text{basis}}^{+}+1}^{\infty}\left(X_{i}^{\ell}\right)^{2}}_{\text{high- order trunc. 
error}}\right],$ where $M_{\text{basis}}$ and $M_{\text{basis}}^{+}$ are the number of basis functions in ${\mathbb{Q}}\left({M_{\text{deg}}},{M_{\text{dim}}}\right)$ and ${\mathbb{Q}}\left({M_{\text{deg}}}+1,{M_{\text{dim}}}\right)$, respectively. To sufficient accuracy, the error can be approximated by discarding all the terms past $\ell=M_{\text{basis}}^{+}$, resulting in the following approximate relative error: $\text{ \bf error}=\frac{{\left\|e\right\|}_{L^{2}}}{{\left\|q_{\text{exact}}\right\|}_{L^{2}}}\approx\sqrt{\frac{\sum\limits_{i=1}\left[\sum\limits_{\ell=1}^{M_{\text{basis}}}\left(X_{i}^{\ell}-Q^{\ell}_{i}\right)^{2}+\sum\limits_{\ell=M_{\text{basis}}+1}^{M_{\text{basis}}^{+}}\left(X_{i}^{\ell}\right)^{2}\right]}{\sum\limits_{i=1}\left[\sum\limits_{\ell=1}^{M_{\text{basis}}^{+}}\left(X_{i}^{\ell}\right)^{2}\right]}}.$ (13)
• approximate order of accuracy: For the convergence studies performed in this paper, we approximate the convergence rate $M$ using $\text{\bf error}(h)=ch^{M}+{\mathcal{O}}\left(h^{M+1}\right)\quad\Longrightarrow\quad M\approx\frac{\log\left({\text{\bf error}(h_{1})}/{\text{\bf error}(h_{2})}\right)}{\log\left({h_{1}}/{h_{2}}\right)},$ (14) where $h$ is the mesh spacing.
• runtime: For each experiment we compute the wall clock runtime of each method in seconds.
For RIDG methods, the computational effort is dominated by small dense matrix inverses in the prediction step, and thus for a method that requires $N_{t}$ time-steps with a mesh of size $N^{M_{\text{dim}}}$ and a space-time basis of size $\theta_{T}$, we expect the runtime to scale as $\text{\bf runtime}\approx\mathcal{O}\left(N_{t}\cdot N^{M_{\text{dim}}}\cdot(3^{M_{\text{dim}}}\theta_{T})^{3}\right).$ (15) If we use the $\mathcal{Q}$ spacetime basis for the prediction step as discussed in RIDG_paper_2019 and we let $M={M_{\text{deg}}}+1$, then $\theta_{T}=M^{{M_{\text{dim}}}+1}$, and so for explicit time-stepping where $\Delta t\approx\nu\Delta x\approx\nu/N$, where $\nu$ is the CFL number, we get that $\text{\bf runtime}\approx\mathcal{O}\left(27^{{M_{\text{dim}}}}\cdot\nu^{-1}\cdot N^{{M_{\text{dim}}}+1}\cdot M^{3{M_{\text{dim}}}+3}\right).$ (16) RIDG methods have the fortunate property that $\nu^{-1}\approx\mathcal{O}(1)$, whereas methods such as SSP-RKDG experience $\nu^{-1}\approx\mathcal{O}(M^{2})$.
• quality: When comparing various methods in terms of efficiency, the two most prevalent metrics are runtime and error. One could consider how fast various methods reach a fixed error (time to solution), or vice versa. In this paper we combine these two metrics into a third metric we call quality. We define the quality of the solution as $\text{\bf quality}=-\log\left(\text{\bf error}\times\text{\bf runtime}\right).$ (17) We note that this metric is on a logarithmic scale, so changes of $\pm 1$ for this metric are quite significant. If we assume that RIDG methods of order $M={M_{\text{deg}}}+1$ have an error convergence scaling of $\mathcal{O}(N^{-M})$, then using eq.
16 we expect the quality to scale as $\text{\bf quality}\approx\mathcal{O}\left(\nu+(M-{M_{\text{dim}}}-1)\log(N)-(3{M_{\text{dim}}}+3)\log(M)\right).$ (18) This provides the intuition that high order RIDG methods will have a high quality, offset by increased runtime costs associated with the dense matrix inverses. However, as we consider higher mesh resolutions, the quality of a method of fixed order will be driven higher by the shrinking error. For smaller mesh resolutions, the quality will be adversely dominated by the high runtime cost of the methods.
• dof/c: This is the number of degrees of freedom per compute core, and is computed as: $\text{\bf dof/c}=\frac{\theta\times{\bf mesh}}{\bf cores}=\frac{\theta\times{\bf mesh}}{\bf tasks\times cores/task},$ (19) where $\theta$ is the size of the DG basis on each element. This quantity is one way to measure the relative compute intensity per core for a given experiment.
• speedup over single task runtimes: We measure the ratio of the runtime of the single-task run to that of the multi-task runs to compare how much faster the latter runs. Ideally we expect the speedup to be equal to the total number of tasks being used. It can be described with the formula $\text{\bf strong speedup}(\text{\bf tasks}=i)=\frac{\text{\bf runtime}(\text{\bf tasks}=1)}{\text{\bf runtime}(\text{\bf tasks}=i)}.$ (20) Note that this definition of speedup compares runtimes for a given set of tasks as opposed to cores. That is, we do not compare runtimes against serial execution.
• strong efficiency: This is a measure of how close our speedup is to the ideal case. It is simply the ratio of actual speedup to ideal speedup. As we add tasks, the parallel efficiency for a strong scaling test is: $\text{\bf strong efficiency}(\text{\bf tasks}=i)=\frac{\text{\bf speedup}(\text{\bf tasks}=i)-1}{i-1}\times 100\%.$ (21) An efficiency of $100\%$ corresponds to perfect linear speedup scaling with the number of tasks (or cores).
An efficiency of $0\%$ means that as we used more tasks, the runtime remained unchanged or increased (we map negative efficiencies to $0\%$).
• comms: This is an estimate for the total number of MPI communications. It is estimated using the formula $\text{\bf comms}={\bf tasks}\times{\bf timesteps}\times{\bf stages/timestep}\times{\bf comms/stage}.$ (22) We approximate the number of timesteps via the CFL relation $\Delta t=\nu\Delta x\quad\implies\quad\text{\bf timesteps}=\nu^{-1}N,$ (23) where $N$ is the number of mesh elements in each direction. We note that RKDG methods of ${M_{\text{deg}}}=3$ have 9 stages per timestep, and that all RIDG methods have 2 stages (predictor and corrector). There are 8 communications per stage in 2D and 26 in 3D.
## 5 Parallel implementation of RIDG
We now briefly discuss the domain decomposition strategy implemented for achieving high performance computing with RKDG and RIDG methods. We consider a Cartesian mesh evenly subdivided into an $N\times N$ (2D) or $N\times N\times N$ (3D) grid of Cartesian submeshes, where $N^{2}$ (2D) or $N^{3}$ (3D) is the number of compute nodes used. Each node (i.e., submesh) is associated with a single MPI task. OpenMP is used for shared-memory parallelization of operations on each Cartesian submesh. MPI is used to perform data communications across the Cartesian submesh interfaces, often called the ghost zone or halo region communications. Submeshes that share interfaces need to communicate multiple times per timestep, as detailed in the next two sections.
### 5.1 Domain decomposition strategy for RKDG
In order to compute the needed fluxes for each stage of RKDG, each Cartesian submesh requires information from face-neighbor submeshes. This means that for RKDG the submeshes must communicate with neighboring submeshes in each stage.
The communication latency for this operation can be hidden by using non-blocking MPI routines for the intra-face communications before we perform the volume integrals over the submesh. Each volume integral has a relatively high arithmetic intensity and can be completed using information completely local to each element. Once these volume integrals are computed, we simply wait until the intra-face communications are completed, which ideally has already occurred. Then, the fluxes and thus the update for the RK stage can be computed. Lastly, each timestep requires an all-to-all communication to enforce the CFL time-step restriction by communicating the CFL number used by each submesh.
### 5.2 Domain decomposition strategy for RIDG
For RIDG the prediction step requires that each given Cartesian submesh has information from vertex-neighbor submeshes, since forming a region for a cell involves the cell’s vertex-neighbors. Latency associated with this communication can be hidden by using non-blocking MPI routines for the interface communications. After these communications are started, we perform the prediction step for elements not on the submesh boundary, as data for their vertex neighbors is located in shared memory. Once the boundary communications are complete, we can then compute the predictions for boundary elements. We must communicate these boundary predictions as they are needed for the correction step. Again we may use non-blocking routines and finish computing the non-boundary predictions during these communications. Once all predictions are formed and the communications are complete, we continue to the correction step. Just as with RKDG, each timestep requires an all-to-all communication to enforce the CFL time-step restriction by communicating the CFL number used by each submesh. This all-to-all communication can also be performed using a non-blocking MPI routine while the correction update is computed.
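The scale of the communication savings from this predictor-corrector structure can be estimated from the comms formula of section 4 (eqs. 22 and 23). The sketch below (Python; the helper name `comms_estimate` is ours) uses the stage and halo counts quoted there, with CFL numbers taken from the 2D experiments; the counts reported in the result tables differ slightly from this simple model, so only the order of magnitude should be read from it.

```python
def comms_estimate(tasks, nu, n, stages, comms_per_stage):
    """Estimated total halo communications, eqs. (22)-(23):
    comms = tasks * timesteps * stages/timestep * comms/stage,
    with timesteps = N / nu for N elements per direction."""
    timesteps = n / nu
    return tasks * timesteps * stages * comms_per_stage

# Single-task 2D runs on a 60x60 mesh (section 4: RKDG with Mdeg = 3 uses
# 9 stages per timestep, RIDG uses 2; there are 8 halo exchanges per stage in 2D).
rkdg = comms_estimate(tasks=1, nu=0.05, n=60, stages=9, comms_per_stage=8)
ridg = comms_estimate(tasks=1, nu=0.7, n=60, stages=2, comms_per_stage=8)
print(f"RKDG ~{rkdg:.2e} halo comms, RIDG ~{ridg:.2e}, ratio ~{rkdg / ridg:.0f}x")
```

The roughly 60-fold gap comes from two compounding factors: RIDG takes 14 times fewer timesteps (CFL 0.7 versus 0.05) and communicates in 2 stages rather than 9.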
For nonlinear problems, the prediction step and its associated communications must be repeated until the region residual is driven down to some tolerance.
## 6 Numerical results
In order to validate the proposed implementation of the regionally-implicit scheme, we consider in this section 1D, 2D, and 3D examples. The 1D code runs sufficiently fast that no parallelization is required; we provide the 1D results mostly to demonstrate the efficiency gains of high-order RIDG. Subsequently we consider 2D and 3D examples, all of which are implemented in parallel.
### 6.1 1D results
We consider the following 1D linear advection equation: $\begin{cases}q_{,t}+q_{,x}=0&\text{for}\quad(t,x)\in\left[0,T\right]\times\left[0,1\right],\\\ q(t=0,x)=e^{((x-c)^{2}-\omega^{2})^{-1}}&\text{if}\quad(x-c)^{2}<\omega^{2},\\\ q(t=0,x)=0&\text{if}\quad(x-c)^{2}\geq\omega^{2},\end{cases}$ (24) with $c=\frac{1}{2}$ and $\omega=\frac{1}{3}$. Note that the initial condition is clearly $C^{\infty}$ for $\left|x-c\right|<\omega$; furthermore, it can be shown that all derivatives of this function vanish as $(x-c)^{2}\to\omega^{2}$ from the left, and hence the initial condition is actually $C^{\infty}$ over all of $\mathbb{R}$. This level of smoothness is required to test the convergence properties of arbitrarily high order methods.
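For reference, this bump can be evaluated directly; the sketch below (Python; the helper name `q0` is ours) makes the compact support and the vanishing at the support boundary explicit.

```python
import math

def q0(x, c=0.5, omega=1.0 / 3.0):
    """Compactly supported C-infinity bump initial condition of eq. (24)."""
    r2 = (x - c) ** 2
    if r2 >= omega ** 2:
        return 0.0
    # The exponent ((x-c)^2 - omega^2)^(-1) is negative inside the support
    # and tends to -infinity at its edges, so every derivative vanishes there.
    return math.exp(1.0 / (r2 - omega ** 2))

# The peak value, at x = c, is e^{-1/omega^2} = e^{-9}, roughly 1.2e-4.
print(q0(0.5), q0(0.5 + 0.333), q0(0.9))
```

Evaluated in double precision, the profile underflows to zero slightly inside the support boundary, which is harmless for this smooth test problem.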
1D RKDG: (${M_{\text{deg}}}=3$, $\nu=0.1$)
mesh | dof | $L^{2}$ error (13) | order (14) | runtime (s) | quality (17)
$50$ | $200$ | ${3.07}\mathrm{e}{-}4$ | – | ${5.16}\mathrm{e}{-}1$ | $3.8$
$70$ | $280$ | ${7.96}\mathrm{e}{-}5$ | $4.0$ | ${8.95}\mathrm{e}{-}1$ | $4.1$
$120$ | $480$ | ${9.23}\mathrm{e}{-}6$ | $4.0$ | ${2.34}\mathrm{e}{+}0$ | $4.7$
$240$ | $960$ | ${5.77}\mathrm{e}{-}7$ | $4.0$ | ${7.94}\mathrm{e}{+}0$ | $5.3$
1D RIDG: (${M_{\text{deg}}}=3$, $\nu=0.9$)
mesh | dof | $L^{2}$ error (13) | order (14) | runtime (s) | quality (17)
$50$ | $200$ | ${6.48}\mathrm{e}{-}4$ | – | ${1.09}\mathrm{e}{+}0$ | $3.2$
$70$ | $280$ | ${1.47}\mathrm{e}{-}4$ | $4.4$ | ${1.07}\mathrm{e}{+}0$ | $3.8$
$120$ | $480$ | ${1.85}\mathrm{e}{-}5$ | $3.8$ | ${2.96}\mathrm{e}{+}0$ | $4.3$
$240$ | $960$ | ${8.31}\mathrm{e}{-}7$ | $4.5$ | ${1.24}\mathrm{e}{+}1$ | $5.0$
1D RIDG: (${M_{\text{deg}}}=5$, $\nu=0.9$)
mesh | dof | $L^{2}$ error (13) | order (14) | runtime (s) | quality (17)
$30$ | $180$ | ${1.57}\mathrm{e}{-}5$ | – | ${8.29}\mathrm{e}{-}1$ | $4.9$
$50$ | $300$ | ${8.62}\mathrm{e}{-}7$ | $5.7$ | ${4.21}\mathrm{e}{+}0$ | $5.4$
$70$ | $420$ | ${1.27}\mathrm{e}{-}7$ | $5.7$ | ${4.17}\mathrm{e}{+}0$ | $6.3$
$120$ | $720$ | ${3.18}\mathrm{e}{-}9$ | $6.8$ | ${1.24}\mathrm{e}{+}1$ | $7.4$
1D RIDG: (${M_{\text{deg}}}=7$, $\nu=0.9$)
mesh | dof | $L^{2}$ error (13) | order (14) | runtime (s) | quality (17)
$20$ | $160$ | ${1.32}\mathrm{e}{-}6$ | – | ${1.25}\mathrm{e}{+}0$ | $5.8$
$30$ | $240$ | ${4.77}\mathrm{e}{-}8$ | $8.2$ | ${2.86}\mathrm{e}{+}0$ | $6.9$
$50$ | $400$ | ${6.65}\mathrm{e}{-}10$ | $8.4$ | ${1.54}\mathrm{e}{+}1$ | $8.0$
$70$ | $560$ | ${6.63}\mathrm{e}{-}11$ | $6.9$ | ${1.48}\mathrm{e}{+}1$ | $9.0$
1D RIDG: (${M_{\text{deg}}}=9$, $\nu=0.9$)
mesh | dof | $L^{2}$ error (13) | order (14) | runtime (s) | quality (17)
$10$ | $100$ | ${4.78}\mathrm{e}{-}6$ | – | ${1.05}\mathrm{e}{+}0$ | $5.3$
$20$ | $200$ | ${4.62}\mathrm{e}{-}9$ | $10.0$ | ${4.22}\mathrm{e}{+}0$ | $7.7$
$30$ | $300$ | ${8.25}\mathrm{e}{-}11$ | $9.9$ | ${9.57}\mathrm{e}{+}0$ | $9.1$
$50$ | $500$ | ${6.11}\mathrm{e}{-}13$ | $9.6$ | ${4.83}\mathrm{e}{+}1$ | $10.5$
1D RIDG: (${M_{\text{deg}}}=11$, $\nu=0.9$)
mesh | dof | $L^{2}$ error (13) | order (14) | runtime (s) | quality (17)
$5$ | $60$ | ${1.82}\mathrm{e}{-}4$ | – | ${8.72}\mathrm{e}{-}1$ | $3.8$
$10$ | $120$ | ${5.88}\mathrm{e}{-}8$ | $11.6$ | ${3.52}\mathrm{e}{+}0$ | $6.7$
$20$ | $240$ | ${1.40}\mathrm{e}{-}11$ | $12.0$ | ${1.28}\mathrm{e}{+}1$ | $9.7$
$30$ | $360$ | ${1.21}\mathrm{e}{-}13$ | $11.7$ | ${3.08}\mathrm{e}{+}1$ | $11.4$
Table 1: Convergence and runtime study for the 1D SSP-RKDG ${M_{\text{deg}}}=3$ method and RIDG methods of various ${M_{\text{deg}}}$ for the problem defined by eq. 24. We list the mesh size, the total number of degrees of freedom, the approximate $L^{2}$ error compared to the exact solution, the approximate observed order of convergence, the wall clock runtime in seconds, and the quality for each experiment. We see that the RIDG method can use the same CFL $\nu=0.9$ for all values of ${M_{\text{deg}}}$. The highest quality solutions are those produced by the highest order RIDG methods.
The results of these convergence tests are shown in table 1. We notice that the maximum allowable CFL number for the RIDG methods is bounded from below by some $\nu_{\text{min}}$; the key point is that $\nu_{\text{min}}$ for RIDG is independent of ${M_{\text{deg}}}$, meaning that the maximum allowable time-step does not degrade as ${M_{\text{deg}}}$ increases, in contrast to other explicit DG time-stepping approaches (i.e., RKDG and Lax-Wendroff DG). We observe the expected order of convergence for all methods. We also notice that as we consider higher order RIDG methods, the maximum quality increases.
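These tabulated metrics can be reproduced directly from the definitions in section 4. The sketch below (Python; the function names are ours) checks the efom example of eq. 12 and the order (eq. 14) and quality (eq. 17) entries for the first two rows of the 1D RKDG table, assuming, consistently with the tabulated values, a base-10 logarithm in eq. 17.

```python
import math

def efom_side(theta, mesh_side, m_dim):
    """Side length of the equivalent first-order mesh, eq. (12)."""
    return (theta * mesh_side ** m_dim) ** (1.0 / m_dim)

def order(err1, err2, h1, h2):
    """Approximate order of accuracy, eq. (14)."""
    return math.log(err1 / err2) / math.log(h1 / h2)

def quality(error, runtime):
    """Solution quality, eq. (17), taking the logarithm to be base-10."""
    return -math.log10(error * runtime)

# efom example from section 4: a 10x10 mesh in 2D with a basis of size 16.
print(efom_side(16, 10, 2))                      # -> 40.0

# First two rows of the 1D RKDG (Mdeg = 3) table: meshes of 50 and 70 elements.
print(order(3.07e-4, 7.96e-5, 1 / 50, 1 / 70))   # ~ 4.0
print(quality(3.07e-4, 5.16e-1))                 # ~ 3.8
```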
Furthermore, we notice that the ${M_{\text{deg}}}=3$ RKDG method has a higher quality than the ${M_{\text{deg}}}=3$ RIDG method; however, the higher order RIDG methods each show a continual increase in solution quality of 1-2 for every increase of ${M_{\text{deg}}}$ by 2. Note that each increase of 1 in the solution quality indicates a tenfold decrease in the product of error and runtime (see eq. 17). We can conclude from these results that while the ${M_{\text{deg}}}=3$ RKDG method has a superior quality metric compared to the RIDG method of the same order, the RIDG methods of ${M_{\text{deg}}}\geq 5$ have a superior quality metric compared to the RKDG method. Furthermore, higher order RIDG methods have a superior quality per degree of freedom, since as we increase ${M_{\text{deg}}}$ we are able to obtain higher solution qualities with smaller meshes, thus leading to fewer total degrees of freedom.
### 6.2 2D results
Next we consider the RIDG scheme in two spatial dimensions. In particular, we test the full parallel implementation of RIDG by running strong scaling studies to test the manycore capabilities of RIDG as compared to RKDG. Strong scaling is achieved by fixing the problem size and increasing the number of compute cores used to solve it. For explicit time-stepping methods, this translates to considering a fixed mesh size and subdividing the mesh into smaller pieces, where the number of subdivisions is equal to the number of tasks, as depicted in table 2. A full strong scaling study subdivides the problem among compute cores until the number of elements (also known as cells or zones) per compute core is minimized (ideally to unity). This causes the ratio of MPI communications to work/computation to increase to some maximum value. The 2D strong scaling studies provided in this section were performed at the Institute for Cyber-Enabled Research at Michigan State University, using Intel(R) Xeon(R) Gold 6148 2.40GHz CPUs.
2D Strong Scaling Study
tasks | mesh/task | subdomain | total cores | elements/core
$1$ | $60^{2}$ | $\left[0.000,1.000\right]^{2}$ | $40$ | $90.0$
$4$ | $30^{2}$ | $\left[0.000,0.500\right]^{2}$ | $160$ | $22.5$
$9$ | $20^{2}$ | $\left[0.000,0.33\bar{3}\right]^{2}$ | $360$ | $10.0$
$16$ | $15^{2}$ | $\left[0.000,0.250\right]^{2}$ | $640$ | $5.6$
$25$ | $12^{2}$ | $\left[0.000,0.200\right]^{2}$ | $1000$ | $3.6$
$36$ | $10^{2}$ | $\left[0.000,0.16\bar{6}\right]^{2}$ | $1440$ | $2.5$
Table 2: Strong scaling study for 2D RKDG and RIDG. We scale the methods from a fixed physical domain of $\left[0,1\right]^{2}$ to 36 equally sized subdomains. Each task, which corresponds to one node and thus several processors, solves one subdomain. Above we present the total number of tasks, the mesh size for each subdivision, the physical size of each subdomain, the total number of cores, and the number of mesh elements per core. We see that this experiment drives the number of mesh elements per core down to nearly unity. This scaling study was performed at the Institute for Cyber-Enabled Research at Michigan State University, using Intel(R) Xeon(R) Gold 6148 2.40GHz CPUs.
#### 6.2.1 2D advection
For our first test of the scalability of the RKDG and RIDG methods in two dimensions, we consider the 2D scalar advection equation: $\begin{cases}q_{,t}+q_{,x}+q_{,y}=0&(t,{\underline{x}})\in\left[0,T\right]\times\left[0,1\right]^{2},\\\ q(t=0,{\underline{x}})=e^{\left(\|{\underline{x}}-{\underline{c}}\|^{2}-\omega^{2}\right)^{-1}}&\text{if}\quad\|{\underline{x}}-{\underline{c}}\|^{2}<\omega^{2},\\\ q(t=0,{\underline{x}})=0&\text{if}\quad\|{\underline{x}}-{\underline{c}}\|^{2}\geq\omega^{2},\end{cases}$ (25) where ${\underline{c}}=\frac{1}{2}(1,1)^{T}$ and $\omega=\frac{1}{3}$, and we enforce periodic boundary conditions. We perform the strong scaling study defined by table 2 and report the results in table 3.
2D RKDG: (${M_{\text{deg}}}=3$, $\nu=0.05$)
cores | dof/c | efom | comms | time | eq. 20 | eq. 21 | eq. 17
$40$ | $900.0$ | $190^{2}$ | $9.60{e4}$ | $3.50{e1}$ | $--$ | $--$ | $5.4$
$160$ | $225.0$ | $95^{2}$ | $3.84{e5}$ | $2.63{e1}$ | $1.33\times$ | $11.1\%$ | $5.6$
$360$ | $100.0$ | $63^{2}$ | $8.64{e5}$ | $1.69{e1}$ | $2.07\times$ | $13.4\%$ | $5.8$
$1000$ | $36.0$ | $38^{2}$ | $2.40{e6}$ | $5.01{e1}$ | $0.70\times$ | $0.0\%$ | $5.3$
$1440$ | $25.0$ | $32^{2}$ | $3.46{e6}$ | $6.99{e1}$ | $0.50\times$ | $0.0\%$ | $5.1$
2D RIDG: (${M_{\text{deg}}}=3$, $\nu=0.7$)
cores | dof/c | efom | comms | time | eq. 20 | eq. 21 | eq. 17
$40$ | $1440.0$ | $240^{2}$ | $1.20{e3}$ | $5.40{e1}$ | $1.00\times$ | $--$ | $5.5$
$160$ | $360.0$ | $120^{2}$ | $4.80{e3}$ | $1.56{e1}$ | $3.45\times$ | $81.7\%$ | $6.1$
$360$ | $160.0$ | $80^{2}$ | $1.08{e4}$ | $1.37{e1}$ | $3.94\times$ | $36.8\%$ | $6.1$
$1000$ | $57.6$ | $48^{2}$ | $3.00{e4}$ | $6.89{e0}$ | $7.83\times$ | $28.5\%$ | $6.4$
$1440$ | $40.0$ | $40^{2}$ | $4.32{e4}$ | $6.07{e0}$ | $8.89\times$ | $22.5\%$ | $6.5$
2D RIDG: (${M_{\text{deg}}}=5$, $\nu=0.7$)
cores | dof/c | efom | comms | time | eq. 20 | eq. 21 | eq. 17
$40$ | $3240.0$ | $360^{2}$ | $1.23{e3}$ | $8.71{e2}$ | $1.00\times$ | $--$ | $7.6$
$160$ | $810.0$ | $180^{2}$ | $4.92{e3}$ | $2.82{e2}$ | $3.09\times$ | $69.5\%$ | $8.1$
$360$ | $360.0$ | $120^{2}$ | $1.11{e4}$ | $2.37{e2}$ | $3.68\times$ | $33.5\%$ | $8.2$
$1000$ | $129.6$ | $72^{2}$ | $3.08{e4}$ | $7.55{e1}$ | $11.54\times$ | $43.9\%$ | $8.7$
$1440$ | $90.0$ | $60^{2}$ | $4.43{e4}$ | $4.44{e1}$ | $19.60\times$ | $53.2\%$ | $8.9$
Table 3: Strong scaling study for the 2D RIDG methods with ${M_{\text{deg}}}=3$ and ${M_{\text{deg}}}=5$ on the 2D advection eq. 25. We list the total number of cores, the number of degrees of freedom per core, the equivalent first order mesh as described in eq. 12, the estimated total number of MPI communications as computed by eq.
22, the runtimes in seconds, the speedup over single node performance as computed by eq. 20, the scaling efficiency from a single node eq. 21, and the solution quality eq. 17. We notice that the total runtime of the RKDG method is initially less than that of the RIDG method, but does not scale well with granularity. After 1000 cores, the runtime of RKDG increases as we use more compute cores, indicating runtime scaling breakdown. That is, the many-node RKDG method exhibits low speedups/efficiencies compared to the single node runtime. The fastest RKDG solution was obtained using 360 cores. The RIDG method, while initially more computationally expensive, scales well with the number of added compute resources. As we add more cores, the RIDG methods of ${M_{\text{deg}}}=3,5$ both exhibit monotonic decreases in the overall method runtime, indicating successful runtime scaling to high granularity. As we consider higher granularity, the runtime scaling efficiency for the RIDG method ${M_{\text{deg}}}=3$ begins to drop off. However, the RIDG method ${M_{\text{deg}}}=5$ maintains decent runtime scaling efficiency even at very high granularity. We see that the maximum solution quality obtained by the RKDG method is 5.8, while for RIDG ${M_{\text{deg}}}=3$ the maximum quality is 6.5. For RIDG ${M_{\text{deg}}}=5$ the maximum quality is 8.9. Recalling that solution quality is on a logarithmic scale, this means that the RIDG ${M_{\text{deg}}}=5$ solution exhibits far superior efficiency. Although the RIDG method has a substantially higher computational cost than RKDG, it is able to scale to much higher levels of granularity. Thus, the RIDG method is able to provide a superior solution quality at scale.
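The speedup and efficiency columns of these tables follow mechanically from eqs. 20 and 21. As a sketch (Python; the function names are ours), the values below use the rounded runtimes of the 2D RIDG ${M_{\text{deg}}}=3$ rows at 1 and 4 tasks (40 and 160 cores); the tabulated entries were presumably computed from unrounded runtimes, so the match is only approximate.

```python
def strong_speedup(runtime_one_task, runtime_i_tasks):
    """Speedup over the single-task runtime, eq. (20)."""
    return runtime_one_task / runtime_i_tasks

def strong_efficiency(speedup, tasks):
    """Strong-scaling efficiency relative to ideal linear speedup, eq. (21)."""
    return (speedup - 1.0) / (tasks - 1.0) * 100.0

# 2D RIDG (Mdeg = 3): 5.40e1 s on 1 task, 1.56e1 s on 4 tasks.
s = strong_speedup(54.0, 15.6)
print(f"speedup {s:.2f}x, efficiency {strong_efficiency(s, 4):.1f}%")
# Table 3 reports 3.45x and 81.7% for this pair of runs.
```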
#### 6.2.2 2D Burgers
To demonstrate the capability of this method to efficiently solve nonlinear problems, we consider the 2D scalar Burgers equation: $\begin{cases}q_{,t}+\left(\frac{1}{2}q^{2}\right)_{,x}+\left(\frac{1}{2}q^{2}\right)_{,y}=0&(t,{\underline{x}})\in\left[0,T\right]\times\left[0,1\right]^{2},\\\ q(t=0,{\underline{x}})=\frac{1}{4}(1-\cos(x))(1-\cos(y))&(x,y)\in\left[0,1\right]^{2},\end{cases}$ (26) where we enforce periodic boundary conditions. We perform the same strong scaling study as described in table 2. Our results are listed in table 4. These studies were also performed at the Institute for Cyber-Enabled Research at Michigan State University, using Intel(R) Xeon(R) Gold 6148 2.40GHz CPUs. The entries of the Jacobian eq. 8 are calculated using the quasi-quadrature-free approach sketched in section 3.
2D RKDG: (${M_{\text{deg}}}=3$, $\nu=0.05$)
cores | dof/c | efom | comms | time | eq. 20 | eq. 21 | eq. 17
$40$ | $900.0$ | $190^{2}$ | ${9.60}\mathrm{e}{+}4$ | ${1.90}\mathrm{e}{+}1$ | $--$ | $--$ | $5.5$
$160$ | $225.0$ | $95^{2}$ | ${3.84}\mathrm{e}{+}5$ | ${7.36}\mathrm{e}{+}0$ | $2.58\times$ | $52.7\%$ | $5.9$
$360$ | $100.0$ | $63^{2}$ | ${8.64}\mathrm{e}{+}5$ | ${5.51}\mathrm{e}{+}0$ | $3.45\times$ | $30.6\%$ | $6.0$
$1000$ | $36.0$ | $38^{2}$ | ${2.40}\mathrm{e}{+}6$ | ${1.95}\mathrm{e}{+}1$ | $0.98\times$ | $0.0\%$ | $5.5$
$1440$ | $25.0$ | $32^{2}$ | ${3.46}\mathrm{e}{+}6$ | ${2.91}\mathrm{e}{+}1$ | $0.65\times$ | $0.0\%$ | $5.3$
2D RIDG: (${M_{\text{deg}}}=3$, $\nu=0.7$)
cores | dof/c | efom | comms | time | eq. 20 | eq. 21 | eq.
17
$40$ | $1440.0$ | $240^{2}$ | ${5.48}\mathrm{e}{+}3$ | ${8.89}\mathrm{e}{+}1$ | $1.00\times$ | $--$ | $5.9$
$160$ | $360.0$ | $120^{2}$ | ${2.20}\mathrm{e}{+}4$ | ${2.35}\mathrm{e}{+}1$ | $3.78\times$ | $92.6\%$ | $6.4$
$360$ | $160.0$ | $80^{2}$ | ${4.92}\mathrm{e}{+}4$ | ${1.26}\mathrm{e}{+}1$ | $7.07\times$ | $75.9\%$ | $6.7$
$1000$ | $57.6$ | $48^{2}$ | ${1.37}\mathrm{e}{+}5$ | ${7.94}\mathrm{e}{+}0$ | $11.20\times$ | $42.5\%$ | $6.9$
$1440$ | $40.0$ | $40^{2}$ | ${1.98}\mathrm{e}{+}5$ | ${5.81}\mathrm{e}{+}0$ | $15.31\times$ | $40.9\%$ | $7.1$
Table 4: Strong scaling study for Burgers eq. 26 using the RIDG method with ${M_{\text{deg}}}=3$ and the RKDG method with ${M_{\text{deg}}}=3$, as defined by table 2. We list the total number of cores, the number of degrees of freedom per core, the equivalent first order mesh as described in eq. 12, the estimated total number of MPI communications as computed by eq. 22, the runtimes in seconds, the speedup over single node performance as computed by eq. 20, the scaling efficiency from a single node eq. 21, and the solution quality eq. 17.
We notice that the RKDG method requires many more total halo-region MPI communications than the RIDG method. We also notice that the total runtime of the RKDG method is initially less than that of the RIDG method, but does not scale well with granularity. After 1000 cores, the runtime of RKDG increases as we use more compute cores, indicating runtime scaling breakdown. The fastest RKDG solution was obtained using 360 cores. The RIDG method, while initially more computationally expensive, scales well with the number of added compute resources. As we add more cores, the RIDG method exhibits monotonic decreases in the overall method runtime, indicating successful runtime scaling to high granularity. The fastest runtime was obtained at 1440 cores. The many-node RKDG method exhibits low speedups/efficiencies compared to the single node runtime.
We notice that at 1000 cores, the speedups drop below $1$ and the efficiencies drop to $0\%$, indicating runtime scaling breakdown. The many-node RIDG method exhibits much larger speedups, and thus higher scaling efficiencies. The RIDG method maintains decent runtime scaling efficiency even at very high granularity. Across the various levels of granularity, the maximum solution quality obtained by the RKDG method is 6.0; for RIDG the maximum quality is 7.1. We conclude that the RKDG method does not exhibit runtime scaling to high granularity for this nonlinear problem in two dimensions, but the RIDG method ${M_{\text{deg}}}=3$ exhibits good runtime scaling to high granularity, providing an improvement in the maximum possible solution quality. This is despite the fact that the additional iterations the RIDG method must perform for nonlinear problems greatly increase the computational cost of the method and the total number of MPI communications.
### 6.3 3D results
In this section we test the suitability of the RIDG method at scale for solving 3D problems. The method efficiency is again compared to the Runge-Kutta DG methods. In particular we consider the following 3D linear advection equation: $\begin{cases}q_{,t}+q_{,x}+q_{,y}+q_{,z}=0&\text{for}\quad\left(t,{\underline{x}}\right)\in\left[0,T\right]\times\left[0,1\right]^{3},\\\ q(t=0,{\underline{x}})=e^{(\|{\underline{x}}-{\underline{c}}\|^{2}-\omega^{2})^{-1}}&\text{if}\quad\|{\underline{x}}-{\underline{c}}\|^{2}<\omega^{2},\\\ q(t=0,{\underline{x}})=0&\text{if}\quad\|{\underline{x}}-{\underline{c}}\|^{2}\geq\omega^{2},\end{cases}$ (27) with ${\underline{c}}=\frac{1}{2}(1,1,1)^{T}$ and $\omega=\frac{1}{3}$. Note that the initial condition is clearly $C^{\infty}$ for $\|{\underline{x}}-{\underline{c}}\|<\omega$; this is the multidimensional analog of eq. 24.
3D Strong Scaling Study
tasks | mesh/task | subdomain | total cores | elements/core
$1$ | $48\times 48\times 48$ | $\left[0.000,1.000\right]^{3}$ | $56$ | $1974.9$
$27$ | $16\times 16\times 16$ | $\left[0.000,0.33\bar{3}\right]^{3}$ | $1512$ | $73.1$
$216$ | $8\times 8\times 8$ | $\left[0.000,0.16\bar{6}\right]^{3}$ | $12096$ | $9.1$
$512$ | $6\times 6\times 6$ | $\left[0.000,0.125\right]^{3}$ | $28672$ | $3.9$
Table 5: Strong scaling study for 3D RKDG and RIDG. We scale the methods from a fixed physical domain of $\left[0,1\right]^{3}$ to 512 equally sized subdomains. Each task, which corresponds to one node, solves one subdomain. Above we present the total number of tasks, the mesh size for each subdivision, the physical size of each subdomain, the total number of cores, and the number of mesh elements per core. We see that this experiment scales the problem granularity down to about 4 elements per core. The $L^{2}$ errors for each method are reported with the results in table 6. These runs were performed on Livermore Computing clusters at Lawrence Livermore National Laboratory using Intel Xeon CLX-8276L CPUs with Omni-Path interconnects.
3D RKDG: (${M_{\text{deg}}}=3$, $\nu=0.05$, $L^{2}$ error (13) $={1.36}\mathrm{e}{-}07$)
cores | dof/c | efom eq. 12 | comms | time | eq. 20 | eq. 21 | eq. 17
$56$ | $126390.9$ | $192^{3}$ | ${2.59}\mathrm{e}{+}5$ | ${4.92}\mathrm{e}{+}3$ | $--$ | $--$ | $4.90$
$1512$ | $4681.1$ | $64^{3}$ | ${7.00}\mathrm{e}{+}6$ | ${2.15}\mathrm{e}{+}2$ | $22.86\times$ | $84.1\%$ | $6.26$
$12096$ | $585.1$ | $32^{3}$ | ${5.60}\mathrm{e}{+}7$ | ${2.89}\mathrm{e}{+}1$ | $170.53\times$ | $78.8\%$ | $7.13$
$28672$ | $246.9$ | $24^{3}$ | ${1.33}\mathrm{e}{+}8$ | ${1.54}\mathrm{e}{+}1$ | $319.98\times$ | $62.4\%$ | $7.41$
3D RIDG: (${M_{\text{deg}}}=3$, $\nu=0.6$, $L^{2}$ error (13) $={1.53}\mathrm{e}{-}07$)
cores | dof/c | efom eq. 12 | comms | time | eq. 20 | eq. 21 | eq.
17
$56$ | $126390.9$ | $192^{3}$ | ${4.32}\mathrm{e}{+}3$ | ${8.24}\mathrm{e}{+}3$ | $--$ | $--$ | $4.72$
$1512$ | $4681.1$ | $64^{3}$ | ${1.17}\mathrm{e}{+}5$ | ${3.28}\mathrm{e}{+}2$ | $25.09\times$ | $92.7\%$ | $6.13$
$12096$ | $585.1$ | $32^{3}$ | ${9.33}\mathrm{e}{+}5$ | ${4.43}\mathrm{e}{+}1$ | $186.17\times$ | $86.1\%$ | $7.00$
$28672$ | $246.9$ | $24^{3}$ | ${2.21}\mathrm{e}{+}6$ | ${2.19}\mathrm{e}{+}1$ | $377.10\times$ | $73.6\%$ | $7.31$
3D RIDG: (${M_{\text{deg}}}=5$, $\nu=0.6$, $L^{2}$ error (13) $={4.97}\mathrm{e}{-}11$)
cores | dof/c | efom eq. 12 | comms | time | eq. 20 | eq. 21 | eq. 17
$1512$ | $15798.9$ | $96^{3}$ | $1.17{e5}$ | $2.76{e4}$ | $--$ | $--$ | $7.64$
$12096$ | $1974.9$ | $48^{3}$ | $9.33{e5}$ | $3.76{e3}$ | $7.34\times$ | $90.6\%$* | $8.51$
$28672$ | $833.1$ | $36^{3}$ | $2.21{e6}$ | $1.57{e3}$ | $17.54\times$ | $92.1\%$* | $8.88$
Table 6: Strong scaling study for the 3D advection eq. 27 as defined by table 5. We list the total number of cores, the number of degrees of freedom per core, the equivalent first order mesh as described in eq. 12, the estimated total number of MPI communications as computed by eq. 22, the runtimes in seconds, the speedup over single node performance as computed by eq. 20, the scaling efficiency from a single node eq. 21, and the solution quality eq. 17. In this work we are most interested in the quality metric, as this combines error and runtime into a single metric. In this table, we see that the high order RIDG methods exhibit the highest quality. Note* the scaling efficiencies eq. 21 for the ${M_{\text{deg}}}=5$ RIDG scheme are computed relative to the 27-node run as opposed to the single node case.
To test the scalability of the RIDG method for such a problem, we consider the strong scaling study defined in table 5. As in our scaling studies in the earlier sections, our goal is to shrink the number of degrees of freedom per core.
This maximizes the ratio of communication overhead to task-localized work. In three spatial dimensions, the RIDG prediction stage is a four-dimensional problem and thus there is plenty of task-local work to hide latency, as higher dimensionality exponentially increases the number of elements in each DG basis. For RIDG, this greatly increases the cost of the region matrix inversions. Despite this, our predicted quality metric eq. 18 increases as we consider higher order methods. This is demonstrated in the results shown in table 6. These scaling studies were performed at the Livermore Computing Center at Lawrence Livermore National Laboratory using Intel Xeon E5-2695 v4 2.1GHz CPUs with Omni-Path interconnects. The entries of the Jacobian eq. 8 are calculated using the quasi-quadrature-free approach sketched in section 3. First we notice that both the RKDG and RIDG methods exhibit better scaling efficiency for this 3D problem than they do for the 2D problem. The higher computational complexity of DG in three dimensions decreases the ratio of MPI-communication and OpenMP overhead to parallelized work; that is, it increases the computational work available to hide the communication latencies, despite the increased number of communications per time-step. This is indicated by the "degrees of freedom per core" metric eq. 19 shown in table 6 compared to the same metric in table 4. Even so, the RKDG method requires many more total halo-region MPI communications than the RIDG method of any order. This is due to the fact that the RIDG methods of ${M_{\text{deg}}}=3,5$ exhibit the enhanced CFL restriction $\nu=0.6$, and thus can take a relatively large timestep compared to the RKDG method. Furthermore, we notice that the total runtime of the RKDG method is half that of the RIDG method of the same ${M_{\text{deg}}}$. Since their errors as per table 6 are similar in magnitude, this implies the RKDG method ${M_{\text{deg}}}=3$ has a higher quality than the RIDG method ${M_{\text{deg}}}=3$.
For the higher order RIDG method, ${M_{\text{deg}}}=5$, we observe that the runtimes increase by roughly two orders of magnitude when compared to the methods of ${M_{\text{deg}}}=3$. However, the errors as seen in table 5 are three orders of magnitude smaller for ${M_{\text{deg}}}=5$. Thus, we find that the higher order methods have a higher quality metric than the methods of ${M_{\text{deg}}}=3$, and are therefore more efficient. This is achieved in part by the ability of the RIDG method of ${M_{\text{deg}}}=5$ to maintain a strong scaling efficiency of around $90\%$ above 28,000 cores, whereas the lower order schemes begin to drop off in efficiency. Note that in 3D the quality metric for ADER-DG methods is approximately $0.12$ higher than the quality metric for RKDG Dumbser2018 . We conclude that, due to its high quality metric, the high order RIDG method with ${M_{\text{deg}}}=5$ scales very efficiently for 3D problems. ## 7 Conclusion In this paper we have explored the scalability of the regionally implicit discontinuous Galerkin (RIDG) method, which was previously shown to have favorable convergence properties for solving hyperbolic conservation laws RIDG_paper_2019 . We compared the results to those of the extremely popular strong-stability-preserving Runge-Kutta DG (SSP-RKDG) method. We performed these comparisons for a two-dimensional linear problem, a two-dimensional nonlinear problem, and a three-dimensional linear problem which served as a toy problem related to the relativistic Vlasov-Maxwell system. We demonstrated efficient strong-scaling properties of the RIDG methods: we are able to maintain vertex-neighbor stenciling and communications while taking highly enhanced time-step sizes. We also demonstrated the substantial boost in efficiency from using the quasi-quadrature-free strategy for space-time matrix assembly. 
This strategy reduces the computational complexity of space-time matrix assembly down to what we would expect from purely spatial matrix assembly. This is critical to the viability of RIDG methods. The work used to hide the aforementioned communication and collective latencies itself benefits from intra-node (shared memory) parallelism via OpenMP openmp ; Architecture2018 . The RIDG method achieves decent intra-node efficiency simply by distributing the “regions” over the available shared-memory compute cores, similar to the ADER-DG parallelism strategy Fambri2018 . The result is that in the limit of minimum granularity (minimum cells per core), the work done by each core is dominated by the formation of the space-time region Jacobian and small linear system solves. In contrast, the RKDG method distributes the computation of the flux quadrature and the volume integral quadrature to form a residual. In the limit of minimum granularity, this means that each CPU core computes the spatial component of the residual of a cell. Thus, the RIDG method distributes a far greater load of work compared to the RKDG method, which leads to superior intra-node scaling, at the cost of a greatly increased computational cost per time step. Further, the superior stability properties of the RIDG method allowed larger time steps than the RKDG method was able to take. The predictor-corrector strategy of the RIDG method means that it communicates only twice per time-step, whereas the RKDG method communicates once per stage, i.e., many times per time-step. Thus, the RIDG method communicates across domain decomposition pseudo-boundaries (halo regions) nearly two orders of magnitude fewer times than the RKDG method. In addition, the amount of work used to hide the latency of the communications is much greater for the RIDG method than for the RKDG method. 
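The communication-count comparison above can be made concrete with a toy model: each method's halo-exchange count per unit of simulated time is (steps per unit time) times (exchanges per step). Only the RIDG values ($\nu=0.6$, two exchanges per step) are taken from the text; the RKDG stage count and CFL number below are illustrative assumptions, not the paper's measured values.

```python
# Toy model of halo-exchange counts per unit of simulated time.
# RIDG: two exchanges per step (predict + correct), enhanced CFL nu = 0.6.
# RKDG: one exchange per RK stage; stage count and CFL number here are
# purely illustrative assumptions for a high-order explicit scheme.

def comms_per_unit_time(nu: float, comms_per_step: float, dx: float = 1.0) -> float:
    """Halo exchanges needed to advance one time unit: steps * exchanges/step."""
    dt = nu * dx            # explicit step size allowed by the CFL condition
    n_steps = 1.0 / dt      # steps required per unit of simulated time
    return n_steps * comms_per_step

rkdg = comms_per_unit_time(nu=0.05, comms_per_step=10)  # assumed stage count / CFL
ridg = comms_per_unit_time(nu=0.6, comms_per_step=2)    # predict + correct
print(f"RKDG performs {rkdg / ridg:.0f}x more halo exchanges per unit time")
```

Even with these mild assumptions the ratio lands near two orders of magnitude, consistent with the qualitative claim in the text.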
In the nonlinear case, this advantage is diminished, as the RIDG method must communicate once per Newton iteration during the prediction step; however, the RIDG method still exhibits a superior ability to hide communication latency behind the Newton iteration matrix solves. Furthermore, the reduced number of time-steps means that there are fewer all-to-all collective communications of the maximum wavespeed, an operation required of explicit time integrators. The RIDG method is able to compute the maximum wavespeed after the prediction step, and is thus able to hide the latency associated with this collective using the corrector step. Lastly, some reduction in parallel scaling efficiency is inevitable, as a growing percentage of each subdomain must be copied into contiguous MPI buffers and a growing number of compute resources must be synchronized for maximum efficiency. However, if the ratio of time spent updating the solution to the time spent performing these copies is maximized, as it is with RIDG, then higher parallel scaling efficiency can be maintained. Due to its excellent inter-node and intra-node parallelism, the RIDG method offers excellent scalability for explicit time-stepping of hyperbolic conservation laws, and is able to demonstrate incredible performance with very high-order methods at high dimensionality. ###### Acknowledgements. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. The authors would like to thank Philipp Grete and Forrest Glines for their helpful conversations. ## Disclaimer This document was prepared as an account of work sponsored by an agency of the United States government. 
Neither the United States government nor Lawrence Livermore National Security, LLC, nor any of their employees makes any warranty, expressed or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States government or Lawrence Livermore National Security, LLC. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States government or Lawrence Livermore National Security, LLC, and shall not be used for advertising or product endorsement purposes. ## References * (1) Atkins, H.L., Shu, C.W.: Quadrature-free implementation of discontinuous Galerkin method for hyperbolic equations. AIAA Journal 36(5), 775–782 (1998). DOI 10.2514/3.13891. URL http://arc.aiaa.org/doi/10.2514/2.436 * (2) Chan, J., Wang, Z., Modave, A., Remacle, J.F., Warburton, T.: GPU-accelerated discontinuous Galerkin methods on hybrid meshes. J. Comput. Physics 318, 142–168 (2016). DOI 10.1016/j.jcp.2016.04.003 * (3) Cockburn, B., Shu, C.W.: The Runge–Kutta discontinuous Galerkin method for conservation laws V: Multidimensional systems. J. Comput. Physics 141(2), 199–224 (1998) * (4) Dagum, L., Menon, R.: OpenMP: an industry standard API for shared-memory programming. Computational Science & Engineering, IEEE 5, 46–55 (1998) * (5) Dumbser, M., Fambri, F., Tavelli, M., Bader, M., Weinzierl, T.: Efficient implementation of ADER discontinuous Galerkin schemes for a scalable hyperbolic PDE engine. Axioms pp. 1–26 (2018). DOI 10.3390/axioms7030063. 
URL http://arxiv.org/abs/1808.03788 * (6) Dumbser, M., Munz, C.D.: Building blocks for arbitrary high order discontinuous Galerkin schemes. J. Sci. Comput. 27, 215–230 (2006) * (7) Dumbser, M., Zanotti, O., Hidalgo, A., Balsara, D.S.: ADER-WENO finite volume schemes with space-time adaptive mesh refinement. J. Comput. Physics 248, 257–286 (2013). DOI 10.1016/j.jcp.2013.04.017. URL http://dx.doi.org/10.1016/j.jcp.2013.04.017 * (8) Engsig-Karup, A.P., Eskilsson, C., Bigoni, D.: A stabilised nodal spectral element method for fully nonlinear water waves. J. Comput. Physics 318, 1–21 (2016). DOI 10.1016/j.jcp.2016.04.060 * (9) Fambri, F., Dumbser, M., Köppel, S., Rezzolla, L., Zanotti, O.: ADER discontinuous Galerkin schemes for general-relativistic ideal magnetohydrodynamics. Mon. Not. R. Astron. Soc. 477(4), 4543–4564 (2018). DOI 10.1093/mnras/sty734 * (10) Fischer, P.F., Heisey, K., Min, M.: Scaling Limits for PDE-Based Simulation. In: 22nd AIAA Computational Fluid Dynamics Conference, 22-26 June 2015, Dallas, TX, pp. 1–10 (2015) * (11) Gabriel, E., Fagg, G.E., Bosilca, G., Angskun, T., Dongarra, J.J., Squyres, J.M., Sahay, V., Kambadur, P., Barrett, B., Lumsdaine, A., Castain, R.H., Daniel, D.J., Graham, R.L., Woodall, T.S.: Open MPI: Goals, concept, and design of a next generation MPI implementation. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 3241, 97–104 (2004). DOI 10.1007/978-3-540-30218-6-19 * (12) Gassner, G., Dumbser, M., Hindenlang, F., Munz, C.D.: Explicit one-step time discretizations for discontinuous Galerkin and finite volume schemes based on local predictors. J. Comput. Physics 230, 4232–4247 (2011) * (13) Gottlieb, S., Shu, C.W.: Total variation diminishing Runge-Kutta schemes. Math. of Comput. 67, 73–85 (1998) * (14) Gottlieb, S., Shu, C.W., Tadmor, E.: Strong stability-preserving high-order time discretization methods. SIAM Rev. 
43(1), 89–112 (2001) * (15) Gupta, M., Narasimhan, S.G.: Legendre polynomials Triple Product Integral and lower-degree approximation of polynomials using Chebyshev polynomials. Tech. Rep. CMU-RI-TR-07-22, Carnegie Mellon University (2007) * (16) Guthrey, P.: Regionally implicit discontinuous Galerkin methods for solving the relativistic Vlasov-Maxwell system. Ph.D. thesis, Iowa State University (2017) * (17) Guthrey, P., Rossmanith, J.: The regionally implicit discontinuous Galerkin method: Improving the stability of DG-FEM. SIAM J. Numer. Analysis 57(3), 1263–1288 (2019). DOI 10.1137/17M1156174 * (18) INTERTWinE Consortium: Best Practice Guide to Hybrid MPI + OpenMP Programming (2017). URL http://www.intertwine-project.eu/sites/default/files/images/INTERTWinE_Best_Practice_Guide_MPI%2BOpenMP_1.1.pdf * (19) Klaij, C., van Der Vegt, J., van Der Ven, H.: Space-time discontinuous Galerkin method for the compressible Navier-Stokes equations. J. Comput. Physics 217(2), 589–611 (2006) * (20) OpenMP Architecture Review Board: OpenMP Application Programming Interface (2018). URL https://www.openmp.org/wp-content/uploads/OpenMP-API-Specification-5.0.pdf * (21) Qiu, J., Dumbser, M., Shu, C.W.: The discontinuous Galerkin method with Lax-Wendroff type time discretizations. Comput. Methods Appl. Mech. Eng. 194, 4528–4543 (2005). DOI 10.1016/j.cma.2004.11.007 * (22) Rusanov, V.: Calculation of interaction of non-steady shock waves with obstacles. J. Comp. Math. Phys. USSR 1, 267–279 (1961) * (23) Sudirham, J., van Der Vegt, J., van Damme, R.: Space-time discontinuous Galerkin method for advection-diffusion problems on time-dependent domains. Appl. Numer. Math. 56(12), 1491–1518 (2006) * (24) Zanotti, O., Fambri, F., Dumbser, M., Hidalgo, A.: Space–time adaptive ADER discontinuous Galerkin finite element schemes with a posteriori sub-cell finite volume limiting. Computers & Fluids 118, 204 – 224 (2015). DOI https://doi.org/10.1016/j.compfluid.2015.06.020
2101.01233
# Perelman’s entropy on ancient Ricci flows Zilu Ma and Yongjia Zhang ###### Abstract In [ZY2], the second author proved Perelman’s assertion, namely, that for an ancient Ricci flow with bounded and nonnegative curvature operator, bounded entropy is equivalent to noncollapsing on all scales. In this paper, we continue this discussion. It turns out that the curvature operator nonnegativity is not a necessary condition, and we need only assume a consequence of Hamilton’s trace Harnack estimate. Furthermore, we show that this condition holds for steady Ricci solitons with nonnegative Ricci curvature. ## 1 Introduction The entropy formula for the Ricci flow was introduced by Perelman [Per02], with the help of which he proved the no local collapsing theorem and some other famous theorems, such as the pseudolocality theorem. Along with the entropy formula, Perelman also invented the reduced geometry for the Ricci flow. Both of these monotonicity formulas have since been central techniques in this field. At first glance, it appears that Perelman’s entropy is the more neatly formulated of the two, whereas the reduced geometry deals with a subsolution, rather than a solution, to the conjugate heat equation. Nevertheless, the analysis of the reduced geometry needs only local geometric information, whereas the analysis of Perelman’s entropy requires one to handle the heat equation, whose solution is sensitive to the global geometry. In view of these facts, it is not surprising that the reduced geometry is more tractable to localize. Indeed, in the construction of the Ricci flow with surgeries, Perelman chiefly applied the reduced geometry. On the other hand, it turns out that many theorems proved by the reduced geometry method can also be proved by Perelman’s entropy method. To begin with, Zhang [ZQ10] showed that in the proof of the Poincaré conjecture, the reduced geometry can be replaced with Perelman’s entropy. 
Furthermore, under the condition of _either_ a Type I curvature bound _or_ bounded and nonnegative curvature operator, a noncollapsed Ricci flow always has an asymptotic shrinking soliton (c.f. [Per02] and [N10]). The original proofs of these asymptotic soliton theorems applied the reduced geometry method, yet they can both be proved by implementing Perelman’s entropy method (c.f. [CZ11] and [ZY2]). Here we would like to point out Bamler’s recent groundbreaking works [Bam20a]–[Bam20c], in which he greatly refined the analysis of the Nash entropy (the time average of Perelman’s entropy) and proved a nice structure theorem for singularity models of the Ricci flow. In this paper, we continue the discussion initiated in [ZY2], where the second author proved Perelman’s assertion (section 11 in [Per02]) > We impose one more requirement on the solutions; namely, we fix some > $\kappa>0$ and require that $g_{ij}(t)$ be $\kappa$-noncollapsed on all > scales… _It is not hard to show that this requirement is equivalent to a > uniform bound on the entropy $S$_, defined as in 5.1 using an arbitrary > fundamental solution to the conjugate heat equation. In this assertion Perelman assumes bounded and nonnegative curvature operator. It is still interesting to ask whether the nonnegative curvature operator condition is necessary. In the present paper, we will show that this condition can indeed be relaxed. To present the statements of our main results, let us recall several definitions. 
Given a complete Ricci flow $(M,g(t))_{t\in I}$, for any $x,y\in M,s,t\in I,s<t,$ we denote by $K(x,t\,|\,y,s)$ the minimal heat kernel coupled with the flow, namely, $\displaystyle(\partial_{t}-\Delta_{g(t)})K(\cdot,t\,|\,y,s)=0,$ $\displaystyle\quad\lim_{t\downarrow s}K(x,t\,|\,y,s)=\delta_{y}(x),$ $\displaystyle(-\partial_{s}-\Delta_{g(s)}+R(\cdot,s))K(x,t\,|\,\cdot,s)=0,$ $\displaystyle\quad\lim_{s\uparrow t}K(x,t\,|\,y,s)=\delta_{x}(y).$ See, for example, [CCGG+10, Theorem 24.40] for the existence of such heat kernels. Following [Per02], we denote by $\Box:=\partial_{t}-\Delta_{g(t)}$ the heat operator coupled with the Ricci flow $(M,g(t))_{t\in I}$, and by $\Box^{*}=-\partial_{t}-\Delta_{g(t)}+R_{g(t)}$ the conjugate heat operator. It follows from the Stokes theorem that for any $u,v\in C^{2}_{c}(M\times I)$, i.e., $C^{2}$ functions over some interval $I\subset[0,T]$ with compact supports, we have $\frac{d}{dt}\int_{M}uv\,dg_{t}=\int_{M}(\Box u)v-u(\Box^{*}v)\,dg_{t},$ where we denote by $dg_{t}$ the volume form induced by the metric $g(t)$. ###### Definition 1.1. Let $(M,g(t))_{t\in[0,T]}$ be a complete Ricci flow. Let $u(x,t)=K(x_{0},t_{0}\,|\,x,t)=(4\pi\tau)^{-\frac{n}{2}}e^{-f(x,t)}$ be the conjugate heat kernel based at $(x_{0},t_{0})\in M\times(0,T]$, where $\tau=t_{0}-t\in(0,t_{0}]$ is the backward time. Then Perelman’s entropy and the Nash entropy based at $(x_{0},t_{0})$ are respectively defined as $\displaystyle\mathcal{W}_{(x_{0},t_{0})}(\tau)$ $\displaystyle=$ $\displaystyle\int_{M}\Big{(}\tau\big{(}|\nabla f|^{2}+R\big{)}+f-n\Big{)}u\,dg_{t},$ (1.2) $\displaystyle\mathcal{N}_{(x_{0},t_{0})}(\tau)$ $\displaystyle=$ $\displaystyle\int_{M}fu\,dg_{t}-\frac{n}{2},$ (1.3) for all $\tau\in(0,t_{0}]$. 
Perelman’s well-known monotonicity formula indicates that, unless $(M,g(t))$ is the static Euclidean space, in which case $\mathcal{W}_{(x_{0},t_{0})}(\tau)\equiv 0$, $\mathcal{W}_{(x_{0},t_{0})}(\tau)$ is always negative and monotonically decreasing in $\tau$. Furthermore, Perelman’s entropy always converges to zero at its base time, namely, $\displaystyle\lim_{\tau\rightarrow 0}\mathcal{W}_{(x_{0},t_{0})}(\tau)=0.$ (1.4) The Nash entropy bears the same monotonicity property and satisfies (1.4), since it is known to be the time average of Perelman’s entropy $\displaystyle\mathcal{N}_{(x_{0},t_{0})}(\tau)=\frac{1}{\tau}\int_{0}^{\tau}\mathcal{W}_{(x_{0},t_{0})}(\eta)\,d\eta.$ (1.5) In Perelman’s assertion, he mentioned the notions of _bounded entropy_ and _noncollapsing_. They precisely mean the following. ###### Definition 1.6. Let $(M,g(t))_{t\in(-\infty,0]}$ be an ancient solution. Then $g(t)$ is said to have bounded entropy, if $\displaystyle W:=\inf_{(x,t)\in M\times(-\infty,0];\eta>0}\mathcal{W}_{(x,t)}(\eta)>-\infty,$ (1.7) where the quantity $W$ is called the _entropy bound_. ###### Definition 1.8. Let $(M^{n},g(t))$ be a complete Ricci flow. Then, $g(t)$ is called weakly $\kappa$-noncollapsed, if, for any $r>0$, it holds that $\operatorname{Vol}_{g(t)}\big{(}B_{g(t)}(x,r)\big{)}\geq\kappa r^{n}$ whenever $|{\operatorname{Rm}}|(y,s)\leq r^{-2}$ for any $(y,s)\in B_{g(t)}(x,r)\times[t-r^{2},t]$. $g(t)$ is called strongly $\kappa$-noncollapsed, if, for any $r>0$, it holds that $\operatorname{Vol}_{g(t)}\big{(}B_{g(t)}(x,r)\big{)}\geq\kappa r^{n}$ whenever $R_{g(t)}\leq r^{-2}$ on $B_{g(t)}(x,r)$. Here $B_{g(t)}(x,r)$ stands for the $g(t)$-geodesic ball centered at $x$ with radius $r$, and $\operatorname{Vol}_{g(t)}$ stands for the Riemannian volume with respect to $g(t)$. Next, we review several definitions involved with Perelman’s reduced geometry. Suppose that $(M,g(t))_{t\in[-T,0]}$ is a complete Ricci flow. 
Fix a base point $(p_{0},t_{0})\in M\times(-T,0].$ For any piecewise smooth curve $\gamma:[0,\tau]\to M$ with $\gamma(0)=p_{0}$ and $t_{0}-\tau\geq-T,$ we define $\mathcal{L}(\gamma):=\int_{0}^{\tau}\sqrt{s}\left(R_{g(t_{0}-s)}+|\dot{\gamma}|^{2}_{g(t_{0}-s)}\right)(\gamma(s))\,ds.$ Then, let $L(x,\tau):=L_{(p_{0},t_{0})}(x,\tau):=\inf_{\gamma}\mathcal{L}(\gamma),$ where the infimum is taken over all piecewise smooth curves $\gamma:[0,\tau]\to M$ with $\gamma(0)=p_{0}$ and $\gamma(\tau)=x$, and $\ell(x,\tau):=\ell_{(p_{0},t_{0})}(x,\tau):=\frac{1}{2\sqrt{\tau}}L_{(p_{0},t_{0})}(x,\tau)$ is called the _reduced distance_ based at $(p_{0},t_{0}).$ Perelman’s _reduced volume_ based at $(p_{0},t_{0})$ is defined as $V_{(p_{0},t_{0})}(\tau):=(4\pi\tau)^{-n/2}\int_{M}\exp\left(-\ell_{(p_{0},t_{0})}(\cdot,\tau)\right)\,dg_{t_{0}-\tau},$ for all $\tau\in(0,T+t_{0}]$. $V_{(p_{0},t_{0})}(\tau)$ is known to be monotonically decreasing in $\tau$. The main theorem of this paper is the following. ###### Theorem 1.9. Let $(M^{n},g(t))_{t\in(-\infty,0]}$ be an ancient solution to the Ricci flow with bounded curvature within each compact time interval. Let $\ell$ be the reduced distance based at $(p,0)$, where $p\in M$ is a fixed point. Assume that there exists $C>0$ such that the following hold. 1. (1) $|{\operatorname{Rm}}_{g(t)}|\leq CR_{g(t)}$ for all $t\in(-\infty,0]$. 2. (2) $\displaystyle|\nabla\ell|^{2}+R\leq\frac{C\ell}{\tau}$ for all $\tau\in(0,\infty)$, where $\tau=-t$ is the backward time. Then $g(t)$ is $\kappa$-noncollapsed if and only if it has bounded entropy; the constant $\kappa$ and the entropy bound depend on each other. _Remarks_ : 1. 1. Because of the assumptions above, the notions of strong noncollapsing and weak noncollapsing are equivalent. 2. 2. 
Assumption (2) is implied by Hamilton’s trace Harnack estimate [Ha1]: $\displaystyle\frac{\partial R}{\partial t}-2\langle X,\nabla R\rangle+2\operatorname{Ric}(X,X)\geq 0$ (1.10) for any smooth vector field $X$ on $M$. See the argument in Section 7.2 of [Per02]. In [ZY2], the second author used the nonnegativity of the curvature operator for two reasons. Let $(M,g(\tau))_{\tau\in[0,\infty)}$ be an ancient Ricci flow with bounded and nonnegative curvature operator, where $\tau$ is the backward time, then 1. (1) Hamilton’s trace Harnack implies assumption (2) in the statement of Theorem 1.9. This inequality implies that the ancient solution is locally Type I wherever the reduced distance is bounded. This fact, along with the noncollapsing assumption, implies the existence of an asymptotic shrinker. 2. (2) If, in addition to the bounded and nonnegative curvature operator condition, the ancient solution is $\kappa$-noncollapsed, then by Perelman’s bounded curvature at bounded distance theorem (c.f. Section 11 of [Per02]), the curvature scale $\displaystyle r(x):=R(x)^{-\frac{1}{2}}$ is comparable with the curvature radius $\displaystyle r_{{\operatorname{Rm}}}(x):=\sup\\{s:|{\operatorname{Rm}}|\leq s^{-2}\text{ on }B(x,s)\\}$, and hence the geometry of the parabolic cube $B_{g(\tau)}(x,r)\times[\tau-r^{2},\tau+r^{2}]$, where $r:=R(x,\tau)^{-\frac{1}{2}}$, is bounded in terms of $r$. This fact is crucial to the point-wise estimate of the conjugate heat kernel. As we will find out later in this paper, among the above two points, the latter is not as essential as the former. Indeed, the main technique of proving Theorem 1.9 is to replace the curvature scale $R^{-\frac{1}{2}}$ by $r(x)=\sqrt{\tau}\ell^{-\frac{1}{2}}$. The latter scale, though not directly related to the curvature, serves well for the purpose mentioned in point (2) above. Once this is established, we can use a similar argument as in [ZY2] to estimate the conjugate heat kernel. 
In fact, this scale is also implemented in the proof of the quadratic lower bound of $\ell$; see Lemma 3.2 in [Y]. The proof of Theorem 1.9 together with some of Bamler’s [Bam20a] results on the Nash entropy also implies the following interesting corollary, which is an extension of a result of Xu [Xu17]. ###### Corollary 1.11 (Entropy uniqueness of the asymptotic shrinker). Let $(M,g(t))_{t\in(-\infty,0]}$ be a $\kappa$-noncollapsed ancient solution with bounded curvature within each compact time interval. Assume either 1. (1) $g(t)$ has a Type I curvature bound, i.e., there is a constant $C_{\rm I}<\infty$ such that $\sup_{M}|{\operatorname{Rm}}|_{g(t)}\leq\frac{C_{\rm I}}{1+|t|},$ for all $t\leq 0,$ or 2. (2) $|{\operatorname{Rm}}|_{g(t)}\leq CR_{g(t)}$ for some positive constant $C$ and for all $t\in(-\infty,0]$, and Hamilton’s trace Harnack (1.10) holds on $(M,g(t))$. Then, all asymptotic shrinkers of $(M,g(t))$ based at any point in $M\times(-\infty,0]$ have the same entropy, which is also equal to the logarithm of the Gaussian density as defined in [CHI04]. Furthermore, for any $(x_{1},t_{1})$ and $(x_{2},t_{2})\in M\times(-\infty,0]$, the following holds. $\displaystyle\lim_{\tau\rightarrow\infty}\ \mathcal{W}_{(x_{1},t_{1})}(\tau)=\lim_{\tau\rightarrow\infty}\ \mathcal{W}_{(x_{2},t_{2})}(\tau)=\lim_{\tau\rightarrow\infty}\ \mathcal{N}_{(x_{1},t_{1})}(\tau)=\lim_{\tau\rightarrow\infty}\ \mathcal{N}_{(x_{2},t_{2})}(\tau)$ (1.12) $\displaystyle=\lim_{\tau\rightarrow\infty}\ \log V_{(x_{1},t_{1})}(\tau)=\lim_{\tau\rightarrow\infty}\ \log V_{(x_{2},t_{2})}(\tau).$ One may naturally ask why assumption (2) in the statement of Theorem 1.9 is meaningful at all, since Hamilton’s trace Harnack is not known to hold without any strong curvature positivity assumption; see [Ha1] and [Br09]. In the present paper, we also show that this condition holds on steady Ricci solitons assuming only nonnegative Ricci curvature. 
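That the $\mathcal{W}$- and $\mathcal{N}$-limits for a fixed base point in (1.12) must agree is a direct consequence of the averaging identity (1.5); the following elementary computation (a reader's note, not taken from the paper) records the step. Suppose $W_{\infty}:=\lim_{\tau\rightarrow\infty}\mathcal{W}_{(x_{0},t_{0})}(\tau)$ exists and is finite. Splitting the average at a fixed $\tau_{0}>0$ gives

```latex
\mathcal{N}_{(x_{0},t_{0})}(\tau)
  \;=\; \frac{1}{\tau}\int_{0}^{\tau_{0}}\mathcal{W}_{(x_{0},t_{0})}(\eta)\,d\eta
  \;+\; \frac{1}{\tau}\int_{\tau_{0}}^{\tau}\mathcal{W}_{(x_{0},t_{0})}(\eta)\,d\eta ,
```

where the first term tends to $0$ as $\tau\rightarrow\infty$, and, since $\mathcal{W}_{(x_{0},t_{0})}$ is nonincreasing, the second term lies between $\frac{\tau-\tau_{0}}{\tau}\,\mathcal{W}_{(x_{0},t_{0})}(\tau)$ and $\frac{\tau-\tau_{0}}{\tau}\,\mathcal{W}_{(x_{0},t_{0})}(\tau_{0})$. Letting $\tau\rightarrow\infty$ and then $\tau_{0}\rightarrow\infty$ yields $\lim_{\tau\rightarrow\infty}\mathcal{N}_{(x_{0},t_{0})}(\tau)=W_{\infty}$.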
Recall that a Riemannian manifold $(M^{n},g)$ is said to have a Ricci soliton structure, if there is a smooth vector field $X$ such that $2\operatorname{Ric}+\mathcal{L}_{X}g=\lambda g,$ for some real number $\lambda.$ The Ricci soliton is called shrinking, steady, or expanding if $\lambda$ is positive, zero, or negative, respectively. The Ricci soliton is gradient if $X=\nabla f$ for some smooth function $f$, which is usually called the _potential function_. Let $(M^{n},g,f)$ be a complete gradient Ricci soliton with potential function $f,$ that is, $\operatorname{Ric}+\nabla^{2}f=\tfrac{\lambda}{2}g,$ for some real number $\lambda.$ Set $\tau(t)=1-\lambda t,$ and define $\Phi_{t}$ to be the $1$-parameter family of diffeomorphisms generated by $\frac{1}{\tau(t)}\nabla^{g}f$ with $\Phi_{0}={\rm id}$. Then $g(t):=\tau(t)\Phi_{t}^{*}g$ is a Ricci flow and is called the _canonical form_ of the gradient Ricci soliton. For the standard properties of Ricci solitons, see, for example, [CCGG+07] and the references therein. The following theorem is an application of our main theorem. ###### Theorem 1.13. Let $(M^{n},g(\tau))_{\tau\in[0,\infty)}$ be the canonical form of a steady gradient Ricci soliton with nonnegative Ricci curvature, where $\tau$ is the backward time. Then Hamilton’s trace Harnack inequality (1.10) holds. Consequently, if $|{\operatorname{Rm}}|\leq CR<\infty$ for some constant $C$, then $g(\tau)$ is $\kappa$-noncollapsed if and only if it has bounded entropy, where $\kappa$ and the entropy bound depend on each other. In the case where $g(\tau)$ is non-flat and $\kappa$-noncollapsed, it has a non-flat asymptotic shrinker. This paper is organized as follows. In section 2 we show that a noncollapsed ancient solution satisfying the assumptions in Theorem 1.9 has an asymptotic shrinker, and that its entropy converges to that of the asymptotic shrinker. In section 3 we verify Bamler’s gradient estimates [Bam20a] on noncompact Ricci flows with bounded curvature. 
In section 4 we show that on an ancient solution, Perelman’s entropy and the Nash entropy based at varying points must all converge to the same number; Theorem 1.9 and Corollary 1.11 are proved in this section. In section 5 we apply our main theorem to steady solitons with nonnegative Ricci curvature. ## 2 Asymptotic shrinking gradient Ricci soliton Let $(M,g(\tau))_{\tau\in[0,\infty)}$ be a $\kappa$-noncollapsed ancient Ricci flow with bounded curvature within each compact time interval, where $\tau$ is the backward time, satisfying $\displaystyle|{\operatorname{Rm}}|\leq CR.$ (2.1) Here and henceforth in this section, $C$ stands for a positive constant which may differ from line to line. Furthermore, we assume that $p\in M$ is a fixed point such that item (2) in the statement of Theorem 1.9 holds for $\ell$, the reduced distance based at $(p,0)$. That is, we assume the following estimate for $\ell$. $\displaystyle|\nabla\ell|^{2}+R\leq\frac{C\ell}{\tau}.$ (2.2) From Perelman’s proof of the existence of an asymptotic shrinker (c.f. [Y]), one can easily show that a noncollapsed ancient solution satisfying (2.1) and (2.2) also has an asymptotic shrinker. Indeed, the conditions above are sufficient to prove quadratic upper and lower bounds for $\ell$ (c.f. Lemma 3.2 in [Y]): $\displaystyle\ell(x,\tau)\sim\frac{1}{\tau}{\rm dist}_{\tau}^{2}(x,y)+C,$ (2.3) where $y\in M$ is a point at which $\ell(\cdot,\tau)$ is bounded by $C$. However, it is worth mentioning that (2.3) by itself is not sufficient to show that the integrand $\displaystyle(4\pi\tau)^{-\frac{n}{2}}e^{-\ell}$ of the reduced volume is uniformly negligible outside a large ball; this fact can instead be obtained by Hein-Naber’s Gaussian concentration theorem [HN14]; see the argument in the line above (2.34). All these facts together are sufficient to show the existence of an asymptotic shrinker. In this section, we will show that the entropy of the ancient solution converges to that of its asymptotic shrinker. 
The main result of this section is the following. ###### Proposition 2.4. Let $(M,g(\tau))_{\tau\in[0,\infty)}$ be a $\kappa$-noncollapsed ancient solution with bounded curvature within each compact time interval and satisfying (2.1). Let $p\in M$ be a fixed point and $\ell$ the reduced distance based at $(p,0)$ such that (2.2) holds. Then, for any sequence of positive numbers $\tau_{i}\nearrow\infty$, if the sequence of points $\\{x_{i}\\}_{i=1}^{\infty}\in M$ satisfies $\limsup_{i\rightarrow\infty}\ell(x_{i},\tau_{i})<\infty,$ then the following convergence happens after passing to a subsequence $\displaystyle\Big{(}M,g_{i}(\tau),(x_{i},1),\ell_{i}\Big{)}_{\tau\in[1,2]}\rightarrow\Big{(}M_{\infty},g_{\infty}(\tau),(x_{\infty},1),\ell_{\infty}\Big{)}_{\tau\in[1,2]},$ where the limit is the canonical form of a Ricci shrinker, $g_{i}(\tau)=\tau_{i}^{-1}g(\tau\tau_{i})$, and $\ell_{i}(\cdot,\tau)=\ell(\cdot,\tau\tau_{i})$. Here the Ricci flows converge in the Cheeger-Gromov-Hamilton sense [Ha2], and the functions $\ell_{i}$ converge in the weak $*W_{\text{loc}}^{1,2}(M_{\infty}\times[1,2])$ sense as well as in the $C_{\text{loc}}^{0,\alpha}(M_{\infty}\times[1,2])$ sense, with arbitrarily fixed $\alpha\in(0,1)$. Furthermore, we have $\displaystyle\lim_{\eta\rightarrow\infty}\mathcal{W}_{(p,0)}(\eta)=\lim_{\eta\rightarrow\infty}\log V_{(p,0)}(\eta)=\log\left(\int_{M_{\infty}}(4\pi\tau)^{-\frac{n}{2}}e^{-\ell_{\infty}}dg_{\infty}(\tau)\right),$ (2.5) where $\mathcal{W}_{(p,0)}(\eta)$ is Perelman’s entropy and $V_{(p,0)}(\eta)$ is Perelman’s reduced volume, both based at $(p,0)$. _Remark:_ As indicated above, due to (2.1), (2.2), and (2.3), the existence statement of an asymptotic shrinker in Proposition 2.4 is almost straightforward. The main focus of Proposition 2.4 is the equality (2.5). Xu [Xu17] first proved it for Type I noncollapsed ancient solutions, and the second author [ZY2] proved it in the bounded and nonnegative curvature operator case. 
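For the reader's convenience, we record the standard computation (elementary, and not specific to this paper) showing that each rescaled flow $g_{i}(\tau)=\tau_{i}^{-1}g(\tau\tau_{i})$ in Proposition 2.4 is again a Ricci flow in the backward time $\tau$. Writing the backward-time Ricci flow equation as $\partial_{s}g=2\operatorname{Ric}(g)$ and setting $s=\tau\tau_{i}$,

```latex
\partial_{\tau}g_{i}(\tau)
  = \partial_{\tau}\left(\tau_{i}^{-1}\,g(\tau\tau_{i})\right)
  = \tau_{i}^{-1}\,\tau_{i}\,(\partial_{s}g)(\tau\tau_{i})
  = 2\operatorname{Ric}\big(g(\tau\tau_{i})\big)
  = 2\operatorname{Ric}\big(g_{i}(\tau)\big),
```

where the last equality uses the scale invariance of the Ricci tensor, $\operatorname{Ric}(\lambda g)=\operatorname{Ric}(g)$ for any constant $\lambda>0$.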
In the rest of this section, $\displaystyle u:=(4\pi\tau)^{-\frac{n}{2}}e^{-f}$ (2.6) denotes the conjugate heat kernel based at $(p,0)$—the same base point as that of $\ell$. Similar to the idea in [ZY2], the major effort of proving (2.5) is to obtain the Gaussian upper and lower bounds for $u$. ###### Lemma 2.7. $\displaystyle(4\pi\tau)^{-\frac{n}{2}}e^{-\ell}$ $\displaystyle\leq$ $\displaystyle u,$ (2.8) $\displaystyle\inf_{M}\ell(\cdot,\tau)$ $\displaystyle\leq$ $\displaystyle\frac{n}{2}.$ (2.9) ###### Proof. (2.8) follows from the fact that the left-hand-side is a subsolution to the conjugate heat equation, and when $\tau\rightarrow 0+$, both sides converge to the Dirac delta measure concentrated at $p\in M$. (2.9) simply follows from a maximum principle; see [Per02]. ∎ Henceforth we will use $p_{\tau}\in M$ to denote a point satisfying $\ell(p_{\tau},\tau)\leq\frac{n}{2}$. ###### Lemma 2.10. $\displaystyle\left|\frac{\partial\ell}{\partial\tau}\right|\leq\frac{C\ell}{\tau}.$ (2.11) ###### Proof. This inequality follows from the assumption (2.2), in combination with the following well-known formula of Perelman (see Lemma 2.22 in [Y]) $\displaystyle 2\frac{\partial\ell}{\partial\tau}+|\nabla\ell|^{2}-R+\frac{\ell}{\tau}=0.$ ∎ In view of inequalities (2.2) and (2.11), let us define the scale $\displaystyle r(x,\tau)=\sqrt{\tau}\min\\{c_{0},\ell(x,\tau)^{-\frac{1}{2}}\\},$ where $c_{0}\in(0,\frac{1}{2})$ is a small constant which we will determine in the course of the proof. Note that $\ell$ is always positive on ancient solutions, hence $r$ is well-defined. Let us then consider the parabolic neighborhood centered at $(x,\tau)$ with radius $r(x,\tau)$. ###### Lemma 2.12. If $c_{0}$ is taken to be small enough, then the following holds. Let $(x_{0},\tau_{0})\in M\times(0,\infty)$ and $r_{0}:=r(x_{0},\tau_{0})$. Then $\displaystyle r(y,\tau_{0})\geq\frac{4}{5}r_{0},\quad\text{ for all }\quad y\in B_{g(\tau_{0})}(x_{0},r_{0}).$ (2.13) ###### Proof. 
By (2.2) and the definition of $r$, if ever $r(\cdot,\tau)<c_{0}\sqrt{\tau}$, then the following computation is valid. $\displaystyle|\nabla r|=\tau^{\frac{1}{2}}|\nabla\ell^{-\frac{1}{2}}|=\frac{\tau^{\frac{1}{2}}}{2\ell^{\frac{3}{2}}}|\nabla\ell|\leq\frac{C}{2}\ell^{-1}=\frac{C}{2\tau}r^{2},$ namely, $\displaystyle|\nabla r^{-1}|\leq\frac{C}{2\tau}\quad\text{ whenever }\quad r^{-1}>\frac{1}{c_{0}\sqrt{\tau}}.$ Therefore, for any $y\in B_{g(\tau_{0})}(x_{0},r_{0})$, integrating the above inequality along a $g(\tau_{0})$-geodesic connecting $x_{0}$ and $y$, we have $\displaystyle\frac{1}{r(y,\tau_{0})}\leq\frac{1}{r_{0}}+\frac{C}{2\tau_{0}}r_{0}=\frac{1}{r_{0}}+\frac{Cr_{0}^{2}}{2\tau_{0}}\frac{1}{r_{0}}\leq\left(1+\frac{Cc_{0}^{2}}{2}\right)\frac{1}{r_{0}}\leq\frac{5}{4r_{0}},$ where the last inequality holds if $c_{0}$ is taken to be small enough. We have also used the fact that $r_{0}\leq c_{0}\sqrt{\tau_{0}}$ by definition. ∎ ###### Lemma 2.14. If $c_{0}$ is taken to be small enough, then the following holds. Let $(x_{0},\tau_{0})\in M\times(0,\infty)$ and $r_{0}:=r(x_{0},\tau_{0})$. Then $\displaystyle r(x_{0},\tau)\geq\frac{4}{5}r_{0}\text{ for all }\tau\in[\tau_{0}-r_{0}^{2},\tau_{0}+r_{0}^{2}]$ ###### Proof. 
By (2.11) and the definition of $r$, so long as $r(\cdot,\tau)<c_{0}\sqrt{\tau}$, the following computation is valid $\displaystyle\left|\frac{\partial}{\partial\tau}r\right|$ $\displaystyle=$ $\displaystyle\frac{1}{2}\left|\frac{1}{\tau^{\frac{1}{2}}\ell^{\frac{1}{2}}}-\frac{\tau^{\frac{1}{2}}}{\ell^{\frac{3}{2}}}\frac{\partial}{\partial\tau}\ell\right|$ $\displaystyle\leq$ $\displaystyle\frac{C}{\tau^{\frac{1}{2}}\ell^{\frac{1}{2}}}=\frac{C}{\tau}r,$ namely, $\displaystyle\left|\frac{\partial}{\partial\tau}\log\left(\frac{r}{\sqrt{\tau}}\right)\right|\leq\frac{C}{\tau}\quad\text{ whenever }\quad\log\left(\frac{r}{\sqrt{\tau}}\right)<\log c_{0}\ll 0.$ Integrating the above inequality from $\tau_{0}$ to $\tau\in[\tau_{0}-r_{0}^{2},\tau_{0}+r_{0}^{2}]$, we have $\displaystyle\log\left(\frac{r(x_{0},\tau)}{\sqrt{\tau}}\right)$ $\displaystyle\geq$ $\displaystyle\log\left(\frac{r(x_{0},\tau_{0})}{\sqrt{\tau_{0}}}\right)-C\left|\int_{\tau}^{\tau_{0}}\frac{1}{\eta}d\eta\right|$ $\displaystyle\geq$ $\displaystyle\log\left(\frac{r(x_{0},\tau_{0})}{\sqrt{\tau_{0}}}\right)-C\log\left(\frac{1+c_{0}^{2}}{1-c_{0}^{2}}\right),$ where the last inequality is because of the fact $\tau\in[\tau_{0}-r_{0}^{2},\tau_{0}+r_{0}^{2}]\subset\big{[}(1-c_{0}^{2})\tau_{0},(1+c_{0}^{2})\tau_{0}\big{]}$. Therefore, taking $c_{0}$ to be small enough, the lemma follows. ∎ Summarizing the previous two lemmas, we obtain the following result. This shows that the scale $r$ which we defined above serves effectively as a local curvature scale. This fact can make up for the lack of bounded curvature at bounded distance. ###### Proposition 2.15. Let $c_{0}$ be a small constant such that Lemma 2.12 and Lemma 2.14 both hold. Let $(x_{0},\tau_{0})\in M\times(0,\infty)$ and $r_{0}=r(x_{0},\tau_{0})$. Then the following are true. 1. (1) $\displaystyle r\geq\frac{1}{2}r_{0}\text{ on }B_{g(\tau_{0})}\left(x_{0},r_{0}\right)\times\left[\tau_{0}-r_{0}^{2},\tau_{0}+r_{0}^{2}\right]$, 2. 
(2) $\displaystyle|{\operatorname{Rm}}|\leq Cr_{0}^{-2}\text{ on }B_{g(\tau_{0})}\left(x_{0},r_{0}\right)\times\left[\tau_{0}-r_{0}^{2},\tau_{0}+r_{0}^{2}\right]$, 3. (3) $\displaystyle\left|\partial^{p}_{\tau}\nabla^{q}{\operatorname{Rm}}\right|\leq C_{p,q}r_{0}^{-2-2p-q}\text{ on }B_{g(\tau_{0})}\left(x_{0},\tfrac{1}{2}r_{0}\right)\times\left[\tau_{0}-\tfrac{1}{4}r_{0}^{2},\tau_{0}+\tfrac{1}{4}r_{0}^{2}\right]$, where $C_{p,q}$ is a constant depending on $p$ and $q$. 4. (4) ${\rm Vol}\Big{(}B_{g(\tau_{0})}(x_{0},r_{0})\Big{)}\geq cr_{0}^{n}$, where $c$ is a constant depending on $\kappa$. ###### Proof. (1) is a combination of Lemma 2.12 and Lemma 2.14. (2) follows from the definition of $r$ and the assumptions (2.1) and (2.2). (3) follows from the localized Shi estimates [Sh]. (4) follows from (2) and the noncollapsing assumption. ∎ The following Gaussian concentration theorem, proved by Hein and Naber in [HN14], is the key tool in the proof of the integral upper bound for the conjugate heat kernel. ###### Proposition 2.16 (Hein-Naber’s Gaussian Concentration). We have $\displaystyle\nu_{\tau}(A)\nu_{\tau}(B)\leq\exp\left(-\frac{1}{8\tau}{\rm dist}_{\tau}^{2}(A,B)\right),$ for any subsets $A$ and $B\subset M$ and for all $\tau>0$, where $\displaystyle\nu_{\tau}(A)=\int_{A}u\,dg(\tau)$ defines a probability measure $\nu_{\tau}$ on $M$. We are ready to prove the following Gaussian upper bound for the conjugate heat kernel. ###### Proposition 2.17. Let $u$ be the conjugate heat kernel based at $(p,0)$ and $p_{\tau}\in M$ a time-dependent point satisfying $\ell(p_{\tau},\tau)\leq\frac{n}{2}$. Then $u$ has a time-invariant Gaussian upper bound centered at $p_{\tau}$, namely, $\displaystyle u(x,\tau)\leq\frac{C}{(4\pi\tau)^{\frac{n}{2}}}\exp\left(-\frac{{\rm dist}_{\tau}^{2}(p_{\tau},x)}{C\tau}\right)$ for all $(x,\tau)\in M\times(0,\infty)$, where $C$ is a constant independent of time. ###### Proof. Let $\tau\in(0,\infty)$ be arbitrarily fixed. 
Then, by (2.2) and the noncollapsing assumption, the following hold on $B_{g(\tau)}(p_{\tau},\sqrt{\tau})$: $\displaystyle\ell$ $\displaystyle\leq$ $\displaystyle C,$ $\displaystyle|\operatorname{Rm}|\leq CR$ $\displaystyle\leq$ $\displaystyle\frac{C}{\tau},$ $\displaystyle\text{Vol}\Big{(}B_{g(\tau)}(p_{\tau},\sqrt{\tau})\Big{)}$ $\displaystyle\geq$ $\displaystyle c\tau^{\frac{n}{2}}.$ By (2.8), if we take $A=B_{g(\tau)}(p_{\tau},\sqrt{\tau})$, then we have $\displaystyle\nu_{\tau}(A)\geq\int_{B_{g(\tau)}(p_{\tau},\sqrt{\tau})}(4\pi\tau)^{-\frac{n}{2}}e^{-\ell}dg_{\tau}\geq c.$ Fix an arbitrary $x\in M$ and let $r_{0}=r(x,\tau)\leq c_{0}\sqrt{\tau}$. Then, by applying Proposition 2.16 with $A=B_{g(\tau)}(p_{\tau},\sqrt{\tau})$ and $\displaystyle B=B_{g(\tau)}\left(x,r_{0}\right)$, we have $\displaystyle\int_{B_{g(\tau)}\left(x,r_{0}\right)}u\,dg(\tau)$ $\displaystyle=$ $\displaystyle\nu_{\tau}(B)\leq\nu_{\tau}(A)^{-1}\exp\left(-\frac{1}{8\tau}{\rm dist}_{\tau}^{2}(A,B)\right)$ $\displaystyle\leq$ $\displaystyle C\exp\left(-\frac{1}{8\tau}\big{(}{\rm dist}_{\tau}(p_{\tau},x)-\sqrt{\tau}-r_{0}\big{)}^{2}\right)$ $\displaystyle\leq$ $\displaystyle C\exp\left(-\frac{1}{16\tau}{\rm dist}^{2}_{\tau}(p_{\tau},x)\right).$ Next, we extend the above integral bound to space-time. (2.11) implies that $\displaystyle\ell(p_{\tau},\tau^{\prime})\leq C\quad\text{ for all }\quad\tau^{\prime}\in[\tau- r_{0}^{2},\tau]\subset\big{[}(1-c_{0}^{2})\tau,\tau\big{]}.$ (2.19) Then, implementing the same argument as before at $\tau^{\prime}\in[\tau- r_{0}^{2},\tau]\subset\big{[}(1-c_{0}^{2})\tau,\tau\big{]}$, with $A=B_{g(\tau^{\prime})}(p_{\tau},\sqrt{\tau^{\prime}})$ and $B=B_{g(\tau^{\prime})}(x,r^{\prime})$, where $r^{\prime}=r(x,\tau^{\prime})$, we obtain $\displaystyle\int_{B_{g(\tau^{\prime})}(x,r^{\prime})}u\,dg(\tau^{\prime})\leq C\exp\left(-\frac{1}{16\tau}{\rm dist}^{2}_{\tau^{\prime}}(p_{\tau},x)\right),$ (2.20) for all $\tau^{\prime}\in[\tau-r_{0}^{2},\tau]$. 
By Proposition 2.15 (1) and (2), we may find a small constant $c_{1}\in(0,\frac{1}{4})$, such that $\displaystyle B_{g(\tau)}(x,c_{1}r_{0})\subset B_{g(\tau^{\prime})}\left(x,r^{\prime}\right)\text{ for all }\tau^{\prime}\in[\tau- r_{0}^{2},\tau].$ Hence, (2.20) can be rewritten as $\displaystyle\int_{B_{g(\tau)}(x,c_{1}r_{0})}u\,dg(\tau^{\prime})\leq C\exp\left(-\frac{1}{16\tau}{\rm dist}^{2}_{\tau^{\prime}}(p_{\tau},x)\right).$ (2.21) Note that the distance on the right-hand-side is in terms of $g(\tau^{\prime})$. This can be dealt with by Perelman’s distance distortion estimate. Indeed, the definition of $r_{0}$ and (2.19) imply that $r(p_{\tau},\tau^{\prime})\geq c\sqrt{\tau}\geq cr_{0}$ for all $\tau^{\prime}\in[\tau-r_{0}^{2},\tau]$. In combination with the fact that $r(x,\tau^{\prime})\geq\frac{1}{2}r_{0}$ for all $\tau^{\prime}\in[\tau- r_{0}^{2},\tau]$ as well as Proposition 2.15(2), we have $\displaystyle{\operatorname{Ric}}_{g(\tau^{\prime})}\leq Cr_{0}^{-2}\quad\text{ on }\quad B_{g(\tau^{\prime})}(x,c_{1}r_{0})\cup B_{g(\tau^{\prime})}(p_{\tau},cr_{0})$ for all $\tau^{\prime}\in[\tau-r_{0}^{2},\tau]$. Hence, by using Lemma 8.3 in [Per02], we obtain $\displaystyle\frac{d}{ds}{\rm dist}_{s}(p_{\tau},x)\leq Cr_{0}^{-2}\quad\text{ for all }s\in\left[\tau-r_{0}^{2},\tau\right].$ Integrating the above inequality from $\tau^{\prime}\in[\tau-r_{0}^{2},\tau]$ to $\tau$, we have $\displaystyle{\rm dist}_{\tau^{\prime}}(p_{\tau},x)\geq{\rm dist}_{\tau}(p_{\tau},x)-C.$ (2.22) Combining (2.21) and (2.22), we obtain $\displaystyle\int_{\tau- r_{0}^{2}}^{\tau}\int_{B_{g(\tau)}\left(x,c_{1}r_{0}\right)}u\,dg_{\tau^{\prime}}d\tau^{\prime}\leq Cr_{0}^{2}\exp\left(-\frac{1}{16\tau}{\rm dist}^{2}_{\tau}(p_{\tau},x)\right).$ (2.23) Since Proposition 2.15 provides good geometry bounds on the parabolic ball $B_{g(\tau)}\left(x,c_{1}r_{0}\right)\times[\tau-r_{0}^{2},\tau]$, we then obtain the pointwise bound by using the standard parabolic mean value inequality (c.f. 
Lemma 3.1 in [CTY]). $\displaystyle u(x,\tau)$ $\displaystyle\leq$ $\displaystyle\frac{C}{r_{0}^{n+2}}\int_{\tau- r_{0}^{2}}^{\tau}\int_{B_{g(\tau)}\left(x,c_{1}r_{0}\right)}u\,dg_{\tau^{\prime}}d\tau^{\prime}$ $\displaystyle\leq$ $\displaystyle Cr_{0}^{-n}\exp\left(-\frac{1}{16\tau}{\rm dist}^{2}_{\tau}(p_{\tau},x)\right)$ $\displaystyle\leq$ $\displaystyle C(4\pi\tau)^{-\frac{n}{2}}\exp\left(-\frac{1}{16\tau}{\rm dist}^{2}_{\tau}(p_{\tau},x)-C\log\frac{r_{0}}{\sqrt{\tau}}\right).$ Finally, to deal with the last term in the above formula, recalling the definition $r_{0}:=r(x,\tau)=\sqrt{\tau}\min\\{c_{0},\ell(x,\tau)^{-\frac{1}{2}}\\}$, we compute $\displaystyle-C\log\big{(}\frac{r_{0}}{\sqrt{\tau}}\big{)}$ $\displaystyle\leq$ $\displaystyle C\log\Big{(}\max\\{c_{0}^{-1},\ell(x,\tau)^{\frac{1}{2}}\\}\Big{)}$ $\displaystyle\leq$ $\displaystyle C\log\left(c_{0}^{-1}+C+C\frac{{\rm dist}_{\tau}(p_{\tau},x)}{\sqrt{\tau}}\right)$ $\displaystyle\leq$ $\displaystyle\frac{1}{32}\frac{{\rm dist}^{2}_{\tau}(p_{\tau},x)}{\tau}+C,$ where we have used the trivial observation that the logarithmic function grows slower than the quadratic function. Precisely, for any $\varepsilon>0$, there exists $C(\varepsilon)>0$, such that $\displaystyle\log x\leq\varepsilon x^{2}+C(\varepsilon)\text{ for all }x>0;$ note that the quadratic upper bound of $\ell$ is merely a consequence of (2.2). Combining the two displayed estimates above, the conclusion of the proposition follows. ∎ Proposition 2.17 has the following immediate implication. ###### Corollary 2.26. For every $\tau\in(0,\infty)$, let $p_{\tau}\in M$ be a point such that $\ell(p_{\tau},\tau)\leq\frac{n}{2}$. 
Then $u$ has the following Gaussian upper and lower bounds $\displaystyle\frac{1}{C(4\pi\tau)^{\frac{n}{2}}}\exp\left(-\frac{C}{\tau}{\rm dist}^{2}_{\tau}(p_{\tau},x)\right)\leq u(x,\tau)\leq\frac{C}{(4\pi\tau)^{\frac{n}{2}}}\exp\left(-\frac{1}{C\tau}{\rm dist}^{2}_{\tau}(p_{\tau},x)\right),$ (2.27) for all $(x,\tau)\in M\times(0,\infty)$, where $C$ is a constant independent of $\tau$. Furthermore, the reduced distance $\ell$ satisfies the following quadratic estimates $\displaystyle\frac{1}{C\tau}{\rm dist}_{\tau}^{2}(p_{\tau},x)-C\leq\ell(x,\tau)\leq\frac{C}{\tau}{\rm dist}_{\tau}^{2}(p_{\tau},x)+C,$ (2.28) for all $(x,\tau)\in M\times(0,\infty)$, where $C$ is a constant independent of $\tau$. ###### Proof. The second inequality of (2.28) follows from the assumption (2.2); this fact, in combination with (2.8), implies the first inequality of (2.27). The first inequality of (2.28) follows from Proposition 2.17 and (2.8). ∎ ###### Proof of Proposition 2.4. We only prove the equality (2.5). As to the existence of the asymptotic shrinker, one may refer to either the comments at the beginning of this section, or the arguments of Proposition 4.2 in [ZY2]. First of all, let us fix the sequences of points $\\{x_{i}\\}_{i=1}^{\infty}\subset M$ and positive numbers $\tau_{i}\nearrow\infty$ as in the statement of the proposition. Then, we define $\displaystyle f_{i}(\cdot,\tau)=f(\cdot,\tau\tau_{i}),\ \ u_{i}=(4\pi\tau)^{-\frac{n}{2}}e^{-f_{i}},$ where, as before, $\displaystyle u:=(4\pi\tau)^{-\frac{n}{2}}e^{-f}$ is the conjugate heat kernel based at $(p,0)$. We will prove that $\\{f_{i}\\}_{i=1}^{\infty}$ also smoothly converges to the potential function on the asymptotic shrinker. First, we verify that the base points $x_{i}$ are equivalent to $p_{\tau_{i}}$. 
By (2.11), we have $\displaystyle\ell(x_{i},\tau)\leq\ell(x_{i},\tau_{i})\left(\max\left\\{\frac{\tau}{\tau_{i}},\frac{\tau_{i}}{\tau}\right\\}\right)^{C}\leq 2^{C}\ell(x_{i},\tau_{i})\leq C\text{ for all }\tau\in\left[\frac{1}{2}\tau_{i},2\tau_{i}\right].$ (2.29) It then follows from (2.28) that $\displaystyle{\rm dist}_{\tau}(x_{i},p_{\tau})\leq C\sqrt{\tau_{i}}\quad\text{ for all }\quad\tau\in\left[\frac{1}{2}\tau_{i},2\tau_{i}\right].$ Hence, after the parabolic scaling, the Gaussian upper and lower bounds (2.27) become $\displaystyle\frac{1}{C}\cdot{\rm dist}^{2}_{g_{i}(\tau)}(x_{i},x)-C\leq f_{i}(x,\tau)\leq C\cdot{\rm dist}^{2}_{g_{i}(\tau)}(x_{i},x)+C,$ (2.30) for all $\tau\in[\frac{1}{2},2]$. Arguing as in Proposition 3.3 of [ZY2], one may immediately obtain uniform growth estimates for the derivatives of $f_{i}$; indeed, all derivatives of $f_{i}$ satisfy uniform polynomial growth bounds. Hence we have $\displaystyle f_{i}$ $\displaystyle\rightarrow$ $\displaystyle f_{\infty}$ $\displaystyle u_{i}$ $\displaystyle\rightarrow$ $\displaystyle u_{\infty}=(4\pi\tau)^{-\frac{n}{2}}e^{-f_{\infty}}$ locally smoothly on $M_{\infty}\times[1,2]$. For the convergence of the entropy and of the integral of $u_{i}$, we need to show that the integral of $u_{i}$ outside a large ball is negligible. Note that, unlike the case of [ZY2], we do not have a lower curvature bound with which to apply the volume comparison theorem. Nevertheless, the Gaussian concentration theorem is sufficient. Fixing a large number $\rho\gg 1$, we apply Proposition 2.16 to $A=B_{g(\tau)}(x_{i},\sqrt{\tau})$ and $B=M\setminus B_{g(\tau)}(x_{i},\rho\sqrt{\tau_{i}})$ for all $\tau\in[\tau_{i},2\tau_{i}]$. Since, by (2.29), $\ell(x_{i},\tau)$ is bounded from above by a constant, we may argue as at the beginning of the proof of Proposition 2.17 to obtain that $\nu_{\tau}(A)\geq c>0$. 
Hence $\displaystyle\nu_{\tau}(B)\leq\nu_{\tau}(A)^{-1}\exp\left(-\frac{1}{8\tau}{\rm dist}_{\tau}^{2}(A,B)\right)\leq C\exp\left(-\frac{\rho^{2}\tau_{i}}{16\tau}\right),$ namely, $\displaystyle\int_{M\setminus B_{g_{i}(\tau)}(x_{i},\rho)}u_{i}\,dg_{i}(\tau)\leq C\exp(-c\rho^{2})\quad\text{ for all }\quad\tau\in[1,2].$ (2.31) With this estimate, one may continue arguing as in the proof of [ZY2, Proposition 4.2] to obtain $\displaystyle\lim_{\eta\rightarrow\infty}\mathcal{W}_{(p,0)}(\eta)$ $\displaystyle=$ $\displaystyle\int_{M_{\infty}}\Big{(}\tau\big{(}|\nabla f_{\infty}|^{2}+R_{g_{\infty}(\tau)}\big{)}+f_{\infty}-n\Big{)}u_{\infty}\,dg_{\infty}(\tau),$ (2.32) $\displaystyle\lim_{i\rightarrow\infty}\int_{M}u_{i}\,dg_{i}(\tau)$ $\displaystyle=$ $\displaystyle\int_{M_{\infty}}(4\pi\tau)^{-\frac{n}{2}}e^{-f_{\infty}}dg_{\infty}(\tau)=1.$ (2.33) Since the right-hand-side of (2.32) is a constant, we immediately obtain, by Perelman’s monotonicity formula, that $f_{\infty}$ is the potential function of the asymptotic shrinker $\displaystyle\operatorname{Ric}_{\infty}+\nabla^{2}f_{\infty}=\frac{1}{2\tau}g_{\infty}.$ The quantity on the right-hand-side of (2.32) is usually called the _entropy of the shrinker_. To show the second equality of (2.5), we need only verify that the integrand of the reduced volume is negligible outside a large ball. This can be verified by using (2.8) and (2.31). Hence we have $\displaystyle\int_{M_{\infty}}(4\pi\tau)^{-\frac{n}{2}}e^{-\ell_{\infty}}dg_{\infty}(\tau)=\lim_{i\rightarrow\infty}\int_{M}(4\pi\tau)^{-\frac{n}{2}}e^{-\ell_{i}}dg_{i}(\tau)=\lim_{\eta\rightarrow\infty}V_{(p,0)}(\eta),$ (2.34) for all $\tau\in[1,2]$. The first equality in (2.5) is proved by comparing the different normalizations of $\ell_{\infty}$ and $f_{\infty}$; note that they are different potential functions of the same shrinker. This was first pointed out by Carrillo and Ni [CaN09]. Since we already have (2.32), (2.33), and (2.34), one may then argue as in the proof of Theorem 1.2 in [ZY2]; this finishes the proof of the proposition. 
∎ ## 3 Bamler’s gradient estimate on noncompact manifolds In this section, we establish Bamler’s sharp gradient estimates [Bam20a] on noncompact manifolds. Under the bounded curvature assumption, these results follow readily from his method; we include the proofs for the convenience of the readers. For a function $u$ defined on $M\times[a,b]$, we write $u_{t}=u(\cdot,t).$ Following [Bam20a], we consider the function $\displaystyle\Phi(x):=\frac{1}{\sqrt{4\pi}}\int_{-\infty}^{x}\exp(-y^{2}/4)\,dy.$ Then $\Phi_{t}(x):=\Phi\left(t^{-1/2}x\right)$ is a solution to the 1-dimensional heat equation $\partial_{t}\Phi_{t}=\Phi_{t}^{\prime\prime}$ with initial condition $\chi_{[0,\infty)}.$ Using Bamler’s method, we have the following. ###### Theorem 3.1 (Theorem 4.1 in [Bam20a]). Let $(M,g(t))_{t\in[t_{0},t_{1}]}$ be a complete Ricci flow with $|{\operatorname{Rm}}|\leq\Lambda$ on $M\times[t_{0},t_{1}]$ for some constant $\Lambda<\infty.$ Consider a solution $u$ to the heat equation coupled with $g(t)$ and assume that $u$ takes values in $(0,1).$ Let $T\geq 0$ and suppose $|\nabla\Phi_{T}^{-1}(u_{t_{0}})|_{g(t_{0})}\leq 1$ if $T>0.$ Then $|\nabla\Phi_{T+t-t_{0}}^{-1}(u_{t})|_{g(t)}\leq 1$ for any $t\in(t_{0},t_{1}].$ Following Bamler, we wish to apply a version of the maximum principle for noncompact Ricci flows [CCGG+08, Theorem 12.14] to $|\nabla\Phi_{t}^{-1}(u_{t})|^{2}$. To do so, an a priori bound is necessary. The following lemma, whose detailed proof is left to the reader, is a consequence of the Bernstein-Bando-Shi technique (c.f. [Ko07]) as well as the well-known Bochner formula for a solution $u$ to the heat equation coupled with the Ricci flow $\displaystyle\Box|\nabla u|^{2}=-2|\nabla^{2}u|^{2}\leq 0,$ where $\Box=\partial_{t}-\Delta_{g(t)}$ is the heat operator coupled with the Ricci flow. ###### Lemma 3.2. Let $(M^{n},g(t))_{t\in[0,1]}$ be a complete Ricci flow with bounded curvature. 
Suppose that $u$ is a solution to the heat equation coupled with $g(t)$ satisfying $|u|\leq 1$ on $M\times[0,1]$. If $|\nabla u_{0}|^{2}\leq A,$ where $A$ is a positive number, then $|\nabla u_{t}|^{2}\leq A$ for all $t\in[0,1].$ If there is no initial gradient bound, then we have $t|\nabla u_{t}|^{2}\leq 25$ on $M\times[0,1]$. ###### Proof of Theorem 3.1. We first consider the case where $T>0.$ By parabolic rescaling and time shifting, we may assume $0<t_{0}=T<1$ and $t_{1}\geq 1$. It suffices to show $|\nabla\Phi_{1}^{-1}(u_{1})|_{g(1)}\leq 1$ under the assumption $|\nabla\Phi_{T}^{-1}(u_{T})|_{g(T)}\leq 1$. In the following, we shall omit the subindices, and the reader should keep in mind that the norms of the gradients are computed using the evolving metric. We write $h_{t}=\Phi_{t}^{-1}(u_{t}).$ For any small $\epsilon>0,$ let $u^{(\epsilon)}_{t}:=\epsilon+(1-2\epsilon)u_{t}\in(\epsilon,1-\epsilon),\quad h^{(\epsilon)}_{t}:=\Phi_{t}^{-1}\left(u^{(\epsilon)}_{t}\right).$ Define $A_{\epsilon}=\Phi^{-1}(1-\epsilon)$; then $\left|h^{(\epsilon)}_{t}\right|\leq A_{\epsilon}\sqrt{t}$ for all $t\in[T,1]$. Note that $u^{(\epsilon)}-\frac{1}{2}=(1-2\epsilon)\left(u-\frac{1}{2}\right),$ which means that $u^{(\epsilon)}$ is closer to $1/2$ than $u$. It follows that $\left|h^{(\epsilon)}_{t}\right|=\left|\Phi_{t}^{-1}(u^{(\epsilon)}_{t})\right|\leq|\Phi_{t}^{-1}(u_{t})|=|h_{t}|$ and $\left|\nabla h^{(\epsilon)}_{T}\right|=(1-2\epsilon)\frac{|\nabla u_{T}|}{\Phi_{T}^{\prime}(h^{(\epsilon)}_{T})}\leq\frac{|\nabla u_{T}|}{\Phi_{T}^{\prime}(h_{T})}=|\nabla h_{T}|\leq 1,$ where the first inequality above is due to the definition of $\Phi$, and the last inequality is simply the assumption of the theorem. Hence $\left|\nabla u^{(\epsilon)}_{T}\right|\leq\Phi_{T}^{\prime}(h^{(\epsilon)}_{T})\leq(4\pi T)^{-1/2}.$ By Lemma 3.2, we have $\left|\nabla u^{(\epsilon)}_{t}\right|\leq C\quad\text{ for all }\quad t\in[T,1],$ where $C=(4\pi T)^{-1/2}$. 
It follows that $\left|\nabla h^{(\epsilon)}_{t}\right|\leq(4\pi t)^{1/2}C\exp(A_{\epsilon}^{2}/4),$ where we have used the fact that $\left|h^{(\epsilon)}_{t}\right|\leq A_{\epsilon}\sqrt{t}$. Hence, $\left|\nabla\frac{\left(h^{(\epsilon)}_{t}\right)^{2}}{2t}\right|$ is bounded on $M\times[T,1].$ By the computations in Theorem 4.1 of [Bam20a], we have $(\partial_{t}-\Delta_{t})\left|\nabla h^{(\epsilon)}\right|^{2}+\nabla\frac{\left(h^{(\epsilon)}\right)^{2}}{2t}\cdot\nabla\left|\nabla h^{(\epsilon)}\right|^{2}=\frac{1-\left|\nabla h^{(\epsilon)}\right|^{2}}{t}\left|\nabla h^{(\epsilon)}\right|^{2}.$ We can apply the maximum principle [CCGG+08, Theorem 12.14] to $|\nabla h^{(\epsilon)}|^{2}-1$, and thereby conclude that $|\nabla h^{(\epsilon)}_{1}|\leq 1.$ Taking $\epsilon\to 0$, we have that $|\nabla h_{1}|\leq 1$ and we have finished the proof in this case. We now consider the case where $T=0.$ This can be proved by a limiting argument as observed in [Bam20b, Lemma 3.8]. By parabolic rescaling and time shifting, we may assume that $T=t_{0}=0$ and $t_{1}\geq 1$. It suffices to prove $|\nabla h_{1}|=|\nabla\Phi_{1}^{-1}(u_{1})|\leq 1,$ where we use the same notations as above. By Lemma 3.2, we have $|\nabla u_{s}|^{2}\leq\frac{25}{s}$ on $M\times(0,1]$. Let $\epsilon\in(0,1/2)$ be arbitrarily fixed. Then, for any $s\in(0,1/2)$ we will define $T=T(\epsilon,s)\in(0,s)$, so that the result in the first case can be applied. To this end, let us compute as follows. 
$\left|\nabla\Phi_{T}^{-1}(u^{(\epsilon)}_{s})\right|=(4\pi T)^{1/2}\exp\left(\left(\Phi_{T}^{-1}(u^{(\epsilon)}_{s})\right)^{2}/4T\right)(1-2\epsilon)|\nabla u_{s}|\leq C(T/s)^{1/2}\exp(A_{\epsilon}^{2}/4).$ We take $T=T(\epsilon,s)$ small enough so that $\left|\nabla\Phi_{T}^{-1}(u^{(\epsilon)}_{s})\right|\leq 1.$ By the result in the first case, we have $\left|\nabla\Phi_{T+1-s}^{-1}(u^{(\epsilon)}_{1})\right|\leq 1.$ Letting $s\to 0$ and then $\epsilon\to 0,$ we have $|\nabla\Phi_{1}^{-1}(u_{1})|\leq 1.$ ∎ We state the following standard result. ###### Lemma 3.3. Suppose that $\mu$ is a probability measure on some measure space $(X,\mathcal{B}).$ Let $q:X\to\mathbb{R}$ be a measurable function and let $\phi:\mathbb{R}\to\mathbb{R}$ be absolutely continuous on each compact interval. For any real number $\alpha,$ if $\phi|_{[\alpha,\infty)}$ is monotone, then $\int_{\\{q>\alpha\\}}\phi(q)\,d\mu=\phi(\alpha)\mu(q>\alpha)+\int_{\alpha}^{\infty}\phi^{\prime}(t)\mu(q>t)\,dt.$ ###### Proof. This is a slightly different statement of [Ru, Theorem 8.16]. The proof is the same. One can consider $\phi^{\prime}(t)\chi_{E}(x,t)$ with $E=\\{(x,t)\in X\times[\alpha,\infty):q(x)>t\geq\alpha\\}$ and apply Tonelli’s theorem to its integral. ∎ ###### Proposition 3.4 (Proposition 4.2 in [Bam20a]). Suppose that the same conditions as in Theorem 3.1 hold for a complete Ricci flow $(M,g(\cdot))$ defined on $[s,t].$ Write $d\nu=K(x,t\,|\cdot,s)\,dg_{s}$. Then for any $x\in M,1\leq p<\infty,$ and any measurable subset $\Omega\subset M,$ we have $(t-s)^{p/2}\int_{\Omega}\left(\frac{|\nabla_{x}K(x,t\,|\,\cdot,s)|}{K(x,t\,|\,\cdot,s)}\right)^{p}\,d\nu\leq C(n,p)\nu(\Omega)\left(-\log(\nu(\Omega)/2)\right)^{p/2}.$ (3.5) Moreover, for any $x\in M$ and $v\in T_{x}M$ with $|v|_{g(t)}=1$, it holds that $(t-s)\int_{M}\left|\frac{\partial_{v}K(x,t\,|\,\cdot,s)}{K(x,t\,|\,\cdot,s)}\right|^{2}\,d\nu\leq 1/2.$ (3.6) ###### Proof. 
By parabolic rescaling and time shifting, we may assume that $[s,t]=[0,1].$ Let $q=\frac{\partial_{v}K(x,1\,|\,\cdot,0)}{K(x,1\,|\,\cdot,0)}.$ The proof of [Bam20a, Proposition 4.2] applies here with only minor variations. For completeness, we will sketch the proof and will refer to Bamler’s original proof wherever his argument carries over. Let $\lambda(\alpha)=\nu(q\geq\alpha),\quad h(t)=\sup\\{\alpha:\lambda(\alpha)\geq t\\},$ where we have written $\nu(q\geq\alpha)=\nu(\\{q\geq\alpha\\})$, and the notations such as $\nu(q>\alpha)$, $\nu(q=\alpha)$, etc., shall also be defined accordingly. Obviously, $\lambda$ and $h$ are non-increasing and left continuous. For any measurable subset $\Omega\subset M$, we have, by Theorem 3.1, $\displaystyle\left|\int_{\Omega}q\,d\nu\right|=\left|\partial_{v}\int_{M}\chi_{\Omega}K(x,1\,|\cdot,0)\,dg_{0}\right|\leq F(\nu(\Omega)),$ (3.7) where $F(s)=\Phi^{\prime}(\Phi^{-1}(s)).$ The integration and differentiation are interchangeable by arguments similar to (and indeed much simpler than) those in the appendix. By the definition of $h$, Bamler proved that $\nu(q\geq\alpha)=|h\geq\alpha|,\quad\nu(q>\alpha)=|h>\alpha|,\quad\nu(q=\alpha)=|h=\alpha|.$ It follows directly from Lemma 3.3 that $\int_{\\{h>\alpha\\}}\phi(h)=\int_{\\{q>\alpha\\}}\phi(q)\,d\nu,$ whenever $\phi|_{[\alpha,\infty)}$ is monotone and absolutely continuous on any compact interval. Claim: Assume $f:\mathbb{R}\to\mathbb{R}$ is absolutely continuous on any compact interval and is monotone on both $[A,\infty)$ and $(-\infty,-A]$ for some $A\in\mathbb{R}_{+}.$ Note that we do not require $f$ to have the same monotonicity on $[A,\infty)$ and $(-\infty,-A]$. For any $\alpha\in\mathbb{R}\cup\\{\pm\infty\\}$ and any measurable subset $\Omega\subset M$ with $\\{q>\alpha\\}\subset\Omega\subset\\{q\geq\alpha\\},$ it holds that $\int_{\Omega}f(q)\,d\nu=\int_{0}^{\nu(\Omega)}f(h).$ (3.8) ###### Proof of the claim. The case where $\alpha=+\infty$ is obvious. 
We only need to consider the case where $\alpha\in\mathbb{R}$. This is because the case where $\alpha=-\infty$ follows from the case where $\alpha\in\mathbb{R}$ and the following observation: when $\alpha=-\infty$, we have $\Omega=M$ and $\int_{M}f(q)\,d\nu=\lim_{\alpha\to-\infty}\int_{\\{q>\alpha\\}}f(q)\,d\nu=\lim_{\alpha\to-\infty}\int_{\\{h>\alpha\\}}f(h)=\int_{0}^{1}f(h),$ where the first and the third equalities can be justified by applying the monotone convergence theorem to $f(q)$ and $f(h)$ using the monotonicity of $f|_{(-\infty,-A]}.$ Fix an arbitrary $\epsilon>0$. Without loss of generality, we may assume $A>\alpha$, for otherwise we may always enlarge $A$. Then $\\{q>A\\}\subset\\{q>\alpha\\}\subset\Omega$, and there is a partition $\alpha=b_{0}<b_{1}<\cdots<b_{m}=A$ such that ${\rm osc}_{[b_{i-1},b_{i}]}f\leq\epsilon/2$ for all $i=1,\cdots,m.$ As in [Bam20a], we have $\displaystyle\ \left|\int_{\Omega}f(q)\,d\nu-\int_{0}^{\nu(\Omega)}f(h)\right|$ (3.9) $\displaystyle\leq$ $\displaystyle\ \left|\int_{\Omega\cap\\{q=\alpha\\}}f(q)\,d\nu-\int_{[0,\nu(\Omega)]\cap\\{h=\alpha\\}}f(h)\right|+\sum_{i=1}^{m}\left|\int_{\\{b_{i-1}<q\leq b_{i}\\}}f(q)\,d\nu-\int_{\\{b_{i-1}<h\leq b_{i}\\}}f(h)\right|$ $\displaystyle+\left|\int_{\\{q>A\\}}f(q)\,d\nu-\int_{\\{h>A\\}}f(h)\right|$ $\displaystyle\leq$ $\displaystyle\,\epsilon,$ where we have applied Lemma 3.3 to conclude that the term on the third line of the above formula is indeed zero. ∎ By the same arguments as in [Bam20a], the claim above implies that for $a\in(0,1),$ $\displaystyle ah(a)$ $\displaystyle\leq\int_{0}^{a}h\leq F(a),$ $\displaystyle(1-a)h(a)$ $\displaystyle\geq\int_{a}^{1}h\geq-F(a).$ When $a\in(0,1/4),$ we also have $h(a)\leq F(a)/a\leq C(-\log a)^{1/2},\quad h(1-a)\geq-C(-\log a)^{1/2}.$ Then, applying the claim to $f(t)=|t|^{p}$ for any $1\leq p<\infty$, (3.5) can be proved using the same arguments as in [Bam20a, Proposition 4.2]. 
To prove (3.6), let us define $H(a):=\int_{0}^{a}h$; by the claim above, $H(a)\leq F(a)$, and clearly $H(0)=0$. Applying the claim with $f(t)=t$ and $\alpha=-\infty$, we have $H(1)=\int_{0}^{1}h=\int_{M}q\,d\nu=0.$ From the definition of $H$, we have $H^{\prime\prime}\leq 0$ in the weak sense. Furthermore, we may argue as in [Bam20a, Proposition 4.2] and obtain $\displaystyle\int_{0}^{1}h(a)^{2}da\leq\frac{1}{2}.$ (3.10) Note that the boundary terms produced by integration by parts in [Bam20a, Proposition 4.2] vanish because of the fact that $H^{\prime}(a)=h(a)\leq F(a)/a\leq C(-\log a)^{1/2},$ when $a\in(0,1/4).$ Applying the claim above with $f(t)=t^{2}$ and $\alpha=-\infty$, (3.6) follows from (3.10). ∎ ## 4 The Nash entropy based at varying points Having established Bamler’s gradient estimates on noncompact manifolds, we now prove several results for the Nash entropy. Following [Bam20a], we use the notation $\displaystyle\mathcal{N}_{s}^{*}(x,t):=\mathcal{N}_{(x,t)}(t-s)$ (4.1) to make explicit the dependence of the Nash entropy on its base point. Here $\mathcal{N}_{(x,t)}(t-s)$ is as defined in (1.3). Henceforth we will also define the time-dependent probability measure $\displaystyle\nu_{x,t}(s)(A)=\int_{A}K(x,t\,|\,\cdot,s)\,dg(s)$ for every measurable subset $A\subset M$. The $L^{1}$ Wasserstein distance is also a very important tool in [Bam20a]: ###### Definition 4.2. Let $(X,d)$ be a complete metric space and $\mu$, $\nu$ probability measures on $X$. Then the $L^{1}$ Wasserstein distance between $\mu$ and $\nu$ is defined as $\displaystyle d_{W_{1}}(\mu,\nu)=\sup_{f}\left(\int_{X}fd\mu-\int_{X}fd\nu\right),$ where the supremum is taken over all bounded $1$-Lipschitz functions. The following observation made by Bamler is merely a consequence of Lemma 3.2. ###### Proposition 4.3 (Lemma 2.7 in [Bam20a]). Let $(M,g(t))_{t\in[0,T]}$, where $0<T<\infty$, be a Ricci flow with bounded curvature. Assume that $s$, $t$, $t_{1}$, and $t_{2}\in[0,T]$ satisfy $s<t\leq t_{1}$, $t_{2}$. 
Let $x_{1}$, $x_{2}\in M$. Then the following holds $\displaystyle d_{W_{1}}^{g(s)}(\nu_{x_{1},t_{1}}(s),\nu_{x_{2},t_{2}}(s))\leq d_{W_{1}}^{g(t)}(\nu_{x_{1},t_{1}}(t),\nu_{x_{2},t_{2}}(t)).$ Our generalization of Bamler’s gradient estimate (Theorem 3.1) also leads to the generalization of the following theorem and corollary in [Bam20a]. These results indicate that, with the evaluation time fixed, the Nash entropy is well behaved with respect to its base point. ###### Theorem 4.4 (Theorem 5.9 in [Bam20a]). Let $(M,g(t))_{t\in[0,T]}$, where $0<T<\infty$, be a complete Ricci flow with bounded curvature. Let $s\in[0,T)$ and assume $R(\cdot,s)\geq R_{\operatorname{min}}$, where $R_{\operatorname{min}}$ is a real number. Then, on $M\times(s,T]$, it holds that $\displaystyle|\nabla\mathcal{N}_{s}^{*}|\leq\left(\frac{n}{2(t-s)}-R_{\operatorname{min}}\right)^{\frac{1}{2}},\ \ -\frac{n}{2(t-s)}\leq\left(\frac{\partial}{\partial t}-\Delta_{t}\right)\mathcal{N}_{s}^{*}\leq 0.$ ###### Proof. The proof of Theorem A.1 shows that the integration and the differentiation are always interchangeable in the computations of both $\nabla\mathcal{N}_{s}^{*}(\cdot,t)$ and $\Box_{x,t}\mathcal{N}_{s}^{*}(x,t)$. Hence, the proof follows [Bam20a] line by line, using (3.6) above and Proposition 2.1 in [CMZ21]. ∎ As a direct consequence, we have the following corollary. ###### Corollary 4.5 (Corollary 5.11 in [Bam20a]). In the same setting as the previous theorem, if $R(\cdot,t^{*})\geq R_{\operatorname{min}}$ and $s<t^{*}\leq t_{1}$, $t_{2}$, where $s$, $t^{*}$, $t_{1}$, and $t_{2}\in[0,T]$, then for $x_{1}$ and $x_{2}\in M$, we have $\displaystyle\mathcal{N}_{s}^{*}(x_{1},t_{1})-\mathcal{N}_{s}^{*}(x_{2},t_{2})\leq\left(\frac{n}{2(t^{*}-s)}-R_{\operatorname{min}}\right)^{\frac{1}{2}}d_{W_{1}}^{g(t^{*})}(\nu_{x_{1},t_{1}}(t^{*}),\nu_{x_{2},t_{2}}(t^{*}))+\frac{n}{2}\log\left(\frac{t_{2}-s}{t^{*}-s}\right).$ We now apply Corollary 4.5 to an ancient solution. 
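Before stating the application, it may help to record the elementary limit behind the choice of the intermediate time $t^{*}=\varepsilon s$ made in the proof below: for fixed $t_{2}$ and $\varepsilon\in(0,1)$, $\displaystyle\lim_{s\rightarrow-\infty}\frac{n}{2}\log\left(\frac{t_{2}-s}{t^{*}-s}\right)=\lim_{s\rightarrow-\infty}\frac{n}{2}\log\left(\frac{t_{2}-s}{(\varepsilon-1)s}\right)=\frac{n}{2}\log\left(\frac{1}{1-\varepsilon}\right),$ so the logarithmic error term in Corollary 4.5 can be made arbitrarily small by letting $s\rightarrow-\infty$ first and then $\varepsilon\rightarrow 0$. 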
###### Proposition 4.6. Let $(M,g(t))_{t\in(-\infty,0]}$ be an ancient solution with bounded curvature within each compact time interval. Then for any $(x_{1},t_{1})$ and $(x_{2},t_{2})\in M\times(-\infty,0]$, it holds that $\displaystyle\lim_{\tau\rightarrow\infty}\mathcal{W}_{(x_{1},t_{1})}(\tau)=\lim_{\tau\rightarrow\infty}\mathcal{N}_{(x_{1},t_{1})}(\tau)=\lim_{\tau\rightarrow\infty}\mathcal{N}_{(x_{2},t_{2})}(\tau)=\lim_{\tau\rightarrow\infty}\mathcal{W}_{(x_{2},t_{2})}(\tau).$ (4.7) ###### Proof. We first prove the second equality of (4.7). Observe that this is equivalent to $\displaystyle\lim_{s\rightarrow-\infty}\big{(}\mathcal{N}_{s}^{*}(x_{1},t_{1})-\mathcal{N}_{s}^{*}(x_{2},t_{2})\big{)}=0.$ (4.8) Without loss of generality, we may assume $t_{1}\leq t_{2}$. Let $\varepsilon>0$ be arbitrarily fixed; we then apply Corollary 4.5 with $s\ll-1$ and $t^{*}=\varepsilon s\leq t_{1}$. Since we can take $R_{\text{min}}=0$ by [CBl], we have $\displaystyle\mathcal{N}_{s}^{*}(x_{1},t_{1})-\mathcal{N}_{s}^{*}(x_{2},t_{2})$ $\displaystyle\leq$ $\displaystyle\left(\frac{n}{2(1-\varepsilon)|s|}\right)^{\frac{1}{2}}d_{W_{1}}^{g(t^{*})}(\nu_{x_{1},t_{1}}(t^{*}),\nu_{x_{2},t_{2}}(t^{*}))$ $\displaystyle+\frac{n}{2}\log\left(\frac{t_{2}-s}{t^{*}-s}\right)$ $\displaystyle\leq$ $\displaystyle\left(\frac{n}{2(1-\varepsilon)|s|}\right)^{\frac{1}{2}}d_{W_{1}}^{g(t_{1})}(\delta_{x_{1}},\nu_{x_{2},t_{2}}(t_{1}))$ $\displaystyle+\frac{n}{2}\log\left(\frac{t_{2}-s}{(\varepsilon-1)s}\right),$ where we have used Proposition 4.3. By the definition of the $L^{1}$ Wasserstein distance, we have that $d_{W_{1}}^{g(t_{1})}(\delta_{x_{1}},\nu_{x_{2},t_{2}}(t_{1}))\leq\int_{M}{\rm dist}_{t_{1}}(x_{1},\cdot)d\nu_{x_{2},t_{2}}(t_{1})\leq C\sum_{j=1}^{\infty}j\exp(-cj^{2})<\infty,$ where the second inequality is due to the Gaussian concentration theorem (Proposition 2.16; see also Proposition 2.2 in [CMZ21] under the current assumption). 
Taking $s\rightarrow-\infty$ on both sides of the inequality above, we obtain $\displaystyle\lim_{s\rightarrow-\infty}\big{(}\mathcal{N}_{s}^{*}(x_{1},t_{1})-\mathcal{N}_{s}^{*}(x_{2},t_{2})\big{)}\leq\frac{n}{2}\log\left(\frac{1}{1-\varepsilon}\right).$ Since $\varepsilon>0$ is arbitrary, we then have $\displaystyle\lim_{s\rightarrow-\infty}\big{(}\mathcal{N}_{s}^{*}(x_{1},t_{1})-\mathcal{N}_{s}^{*}(x_{2},t_{2})\big{)}\leq 0.$ Reversing the order of $(x_{1},t_{1})$ and $(x_{2},t_{2})$, we obtain (4.8). The remaining equalities of (4.7) are proved in [ZY1, Corollary 4.5]; we include the proof here. Fix an arbitrary $\varepsilon>0$; then (1.5) implies $\displaystyle\mathcal{W}_{(x,t)}(\tau)\leq\mathcal{N}_{(x,t)}(\tau)=\frac{1}{\tau}\int_{0}^{\tau}\mathcal{W}_{(x,t)}(\eta)d\eta\leq\frac{1}{\tau}\int_{\varepsilon\tau}^{\tau}\mathcal{W}_{(x,t)}(\eta)d\eta\leq(1-\varepsilon)\mathcal{W}_{(x,t)}(\varepsilon\tau),$ where the last two inequalities use that $\mathcal{W}_{(x,t)}$ is nonpositive and nonincreasing in $\tau$. By first taking $\tau\rightarrow\infty$ and then $\varepsilon\rightarrow 0$, we obtain $\displaystyle\lim_{\tau\rightarrow\infty}\mathcal{W}_{(x,t)}(\tau)=\lim_{\tau\rightarrow\infty}\mathcal{N}_{(x,t)}(\tau).$ ∎ Finally, we are ready to prove Theorem 1.9. ###### Proof of Theorem 1.9. Let $(M,g(t))_{t\in(-\infty,0]}$ be an ancient solution satisfying the conditions of this theorem. The sufficiency direction follows immediately from Proposition 3.3 in [ZY1], namely, if $(M,g(t))_{t\in(-\infty,0]}$ has a finite entropy bound, then it is $\kappa$-noncollapsed on all scales, where $\kappa>0$ depends on the entropy bound. On the other hand, if $(M,g(t))_{t\in(-\infty,0]}$ is $\kappa$-noncollapsed on all scales, where $\kappa>0$, then, by Proposition 2.4, $(M,g(t))_{t\in(-\infty,0]}$ has an asymptotic shrinker. In combination with Proposition 4.6, we have that Perelman’s entropy defined on $(M,g(t))_{t\in(-\infty,0]}$ with arbitrary base point must have the same lower bound, namely the entropy of the asymptotic shrinker. This finishes the proof. ∎ ###### Proof of Corollary 1.11. We consider only case (2).
Let $(x_{0},t_{0})$ be an arbitrary point in $M\times(-\infty,0]$ and $\ell$ the reduced distance based at $(x_{0},t_{0})$. Assumption (2) in Theorem 1.9 is implied by Hamilton’s trace Harnack inequality, as mentioned in the remarks below Theorem 1.9. Then, Corollary 1.11 is a combination of Proposition 2.4 and Proposition 4.6. ∎ ## 5 Applications to steady solitons In this section, we prove Theorem 1.13 together with some other results concerning steady solitons with nonnegative Ricci curvature. Let $(M^{n},g,f)$ be a complete steady gradient Ricci soliton with nonnegative Ricci curvature, normalized so that $\operatorname{Ric}=\nabla^{2}f,\quad R+|\nabla f|^{2}=1.$ Throughout this section, we will consider its _canonical form_ $g(\tau)$; see Section 1 for the definition. Note that $\tau$ stands for the backward time. We first observe that the Ricci nonnegativity condition implies Hamilton’s trace Harnack inequality, which in turn implies item (2) in Theorem 1.9 according to the remarks following it. After establishing the following lemma, Theorem 1.13 follows immediately from Theorem 1.9. ###### Lemma 5.1. Let $(M^{n},g,f)$ be a complete steady gradient Ricci soliton with nonnegative Ricci curvature and let $(M,g(\tau))_{\tau\in[0,\infty)}$ be its canonical form. Then Hamilton’s trace Harnack inequality (1.10) holds. ###### Proof. Since $g(\tau)=\Phi_{\tau}^{*}g$ moves only by diffeomorphisms, it suffices to verify the inequality for the metric $g=g(0).$ Recall that $\nabla R=-2\operatorname{Ric}(\nabla f,\cdot)$ on steady gradient Ricci solitons; see, for example, (1.27) in [CCGG+07]. (Note that we use a different sign convention on $f$ here.)
Let $X$ be an arbitrary vector field on $M.$ We can then simplify the expression of the trace Harnack quantity as: $\displaystyle-\left.\frac{\partial R}{\partial\tau}\right|_{\tau=0}-2\langle X,\nabla R\rangle+2\operatorname{Ric}(X,X)$ $\displaystyle=$ $\displaystyle\ 2\operatorname{Ric}(\nabla f,\nabla f)+4\operatorname{Ric}(X,\nabla f)+2\operatorname{Ric}(X,X)$ $\displaystyle=$ $\displaystyle\ 2\operatorname{Ric}(X+\nabla f,X+\nabla f)\geq 0,$ where we have applied the assumption $\operatorname{Ric}\geq 0.$ ∎ Under the assumption of this section, the AVR (asymptotic volume ratio) is well defined, and hence the existence of a non-flat asymptotic shrinker implies zero AVR as observed in [Ni05], which in turn implies infinite ASCR (asymptotic scalar curvature ratio) as observed in [DZ18]. Recall that for a Riemannian manifold $(M,g)$ with nonnegative Ricci curvature, we have ${\rm AVR}(g):=\lim_{r\to\infty}\frac{{\rm Vol}(B_{r}(p))}{r^{n}},\quad{\rm ASCR}(g):=\limsup_{x\to\infty}R(x){\rm dist}^{2}(x,p).$ One can easily show that the two definitions do not depend on the choice of the base point $p\in M.$ ###### Corollary 5.2. Let $(M,g,f)$ be a complete and $\kappa$-noncollapsed steady gradient Ricci soliton with nonnegative Ricci curvature. Assume $|{\operatorname{Rm}}|\leq CR$ for some constant $C.$ If $g$ is not flat, then ${\rm AVR}(g)=0$ and ${\rm ASCR}(g)=\infty.$ ###### Proof. Suppose to the contrary that ${\rm AVR}(g)=c>0.$ Let $g(\tau)=\Phi_{\tau}^{*}g$ be the canonical form of $(M,g,f)$, where $\Phi_{\tau}$ is the 1-parameter family of diffeomorphisms generated by $\nabla f.$ Then, by the Bishop-Gromov comparison theorem and the self-similarity of $g(\tau)$, we have $\displaystyle\operatorname{Vol}B_{g(\tau)}(x,r)\geq cr^{n}$ (5.3) for all $(x,\tau)\in M\times[0,\infty)$ and for all $r>0$. Let $\ell$ be the reduced distance based at some $(p,0).$ Let $\tau_{i}\nearrow\infty$ and $x_{i}\in M$ be such that $\ell(x_{i},\tau_{i})\leq n/2$.
By Proposition 2.4, after passing to a subsequence, we have that $(M,g_{i}(\tau),(x_{i},1),\ell_{i})\to(M_{\infty},g_{\infty}(\tau),(x_{\infty},1),\ell_{\infty}),$ in the sense as indicated in that proposition, where $g_{i}(\tau)=\tau_{i}^{-1}g(\tau_{i}\tau)$ and $\ell_{i}(\cdot,\tau)=\ell(\cdot,\tau_{i}\tau).$ Here $(M_{\infty},g_{\infty},\ell_{\infty})$ is the canonical form of a non-flat Ricci shrinker. Obviously, (5.3) holds for every $g_{i}(\tau)$, and consequently it also holds for the asymptotic shrinker. It follows that $\operatorname{AVR}(g_{\infty})\geq c>0$, which is a contradiction to the fact that non-flat shrinking gradient Ricci solitons with nonnegative Ricci curvature must have zero AVR (see [CaN09, Corollary 1.1]). Hence ${\rm AVR}(g)=0.$ To prove ${\rm ASCR}(g)=\infty$, we can argue as in [DZ18, Proposition 2.4]. Suppose that ${\rm ASCR}(g)<\infty.$ Write $\rho(x):={\rm dist}(p,x)$ for some base point $p\in M$. Then there are constants $A>1$ and $r_{0}>0$ such that $R(x)\leq\frac{A}{\rho^{2}(x)}\quad\text{ whenever }\quad\rho(x)\geq r_{0}.$ Without loss of generality, we assume that the constant $C$ in (2.1) is no less than $1$. For large $r>r_{0},$ pick $y\in\partial B(p,2\sqrt{AC}r)$. Then we have $B(y,r)\subset B(p,3\sqrt{AC}r)\setminus B(p,\sqrt{AC}r).$ For any $x\in B(y,r),$ we have $|{\operatorname{Rm}}|(x)\leq CR(x)\leq\frac{AC}{\rho^{2}(x)}\leq\frac{AC}{ACr^{2}}=r^{-2}.$ Since $g$ is $\kappa$-noncollapsed, we have $\frac{{\rm Vol}[B(p,3\sqrt{AC}r)]}{(3\sqrt{AC}r)^{n}}\geq\frac{{\rm Vol}[B(y,r)]}{(3\sqrt{AC}r)^{n}}\geq\frac{\kappa}{(3\sqrt{AC})^{n}}.$ Taking $r\to\infty$, we obtain ${\rm AVR}(g)>0,$ which is a contradiction to what we just proved. Hence ${\rm ASCR}(g)=\infty.$ ∎ The above result extends [CLN06, Theorem 9.44], whereas the latter proves ${\rm ASCR}=\infty$ assuming ${\rm sec}\geq 0,\operatorname{Ric}>0$, and that $R$ attains its maximum somewhere.
Here, apart from $|{\operatorname{Rm}}|\leq CR$, we assume $\kappa$-noncollapsing, a condition which holds for singularity models. We also generalized the previous results in [CDM20, Theorem 1.10] to higher dimensions assuming nonnegative Ricci curvature, using a different approach. Finally, we remark that P.-Y. Chan [Cha20] recently proved that for any $4$-dimensional steady gradient Ricci soliton which is a singularity model, we must have that $|{\operatorname{Rm}}|\leq CR$ for some constant $C$ (without assuming a curvature decaying condition as in [Cha19]). ## Appendix A Interchangeability of integration and differentiation In the proofs of Proposition 3.4 and Theorem 4.4, we need the fact that integration and differentiation are interchangeable. In Bamler’s [Bam20a] original proof, this is obviously valid since the Ricci flow in question is on a closed manifold. However, under our assumption, we cannot make the same conclusion so easily. Nonetheless, given the curvature boundedness of the Ricci flow, the estimates for the heat kernel make it possible for us to prove the interchangeability of the integration and the differentiation. We shall only prove the following theorem; all other similar arguments required in the proofs of Proposition 3.4 and Theorem 4.4 can be carried out with similar (and indeed much easier) methods. ###### Theorem A.1. Let $(M^{n},g(t))_{t\in I}$ be a complete Ricci flow with bounded curvature within each compact time interval in $I$. Then, for all $s,t\in I$ with $s<t$, we have $\displaystyle\Box_{x,t}\mathcal{N}_{s}^{*}(x,t)=-\int_{M}\Box_{x,t}\big{(}K(x,t\,|\,\cdot,s)\log K(x,t\,|\,\cdot,s)\big{)}dg_{s}-\frac{n}{2(t-s)},$ where, as before, we have defined $\mathcal{N}^{*}_{s}(x,t):=\mathcal{N}_{(x,t)}(t-s)$. By a parabolic scaling, we shall assume that $s=0$ and $t=1$. We shall henceforth fix a point $x\in M$, a small convex open neighborhood $U$ with $x\in U\subset B_{g_{0}}(x,1)$, and a small positive number $\varepsilon\ll 1$.
Here $g_{0}:=g(0)$ will be used as the reference metric throughout the proof. Indeed, we only need to show that $\displaystyle y\rightarrow$ $\displaystyle\sup_{z\in U}\left|\nabla_{z}\big{(}K(z,1\,|\,y,0)\log K(z,1\,|\,y,0)\big{)}\right|,$ $\displaystyle y\rightarrow$ $\displaystyle\sup_{z\in U}\left|\nabla^{2}_{z}\big{(}K(z,1\,|\,y,0)\log K(z,1\,|\,y,0)\big{)}\right|,$ $\displaystyle y\rightarrow$ $\displaystyle\sup_{t\in(1-\varepsilon,1+\varepsilon)}\left|\partial_{t}\big{(}K(x,t\,|\,y,0)\log K(x,t\,|\,y,0)\big{)}\right|,$ are all dominated by integrable functions, and the conclusion of the theorem follows from Lebesgue’s dominated convergence theorem. ###### Lemma A.2. There are constants $C$, depending on the curvature bound on $M\times[0,1]$ and ${\rm Vol}_{g_{0}}B_{0}(x,1)$, such that $\sup_{z\in U}\left|\nabla_{z}\big{(}K(z,1\,|\,y,0)\log K(z,1\,|\,y,0)\big{)}\right|\leq C\exp\left(-C^{-1}{\rm dist}^{2}_{0}(x,y)\right),$ for all $y\in M$. ###### Proof. Let us first of all fix an arbitrary $y\in M$. By Lemma 26.17 in [CCGG+10], we have $\displaystyle K(\cdot,\cdot\,|\,y,0)\leq\frac{C}{{\rm Vol}_{g_{0}}B_{0}\left(y,\tfrac{\sqrt{t}}{2}\right)}\leq\frac{C}{t^{\frac{n}{2}}{\rm Vol}_{g_{0}}B_{0}(y,1)}\leq\frac{C\exp(C{\rm dist}_{0}(x,y))}{t^{\frac{n}{2}}}\quad\text{ on }\quad M\times(0,1],$ where we have applied the Bishop-Gromov comparison theorem. Then, applying [BCP10, ZQ06] (cf. Lemma 2.4 in [ZY1]) on $M\times[\tfrac{1}{4},1]$, we have $\displaystyle\big{|}\nabla_{z}K(z,t\,|\,y,0)\big{|}$ $\displaystyle\leq$ $\displaystyle\frac{1}{\sqrt{t-\tfrac{1}{4}}}\cdot K(z,t\,|\,y,0)\cdot\sqrt{\log\left(\frac{C\exp(C{\rm dist}_{0}(x,y))}{K(z,t\,|\,y,0)}\right)}$ $\displaystyle\leq$ $\displaystyle CK(z,t\,|\,y,0)\sqrt{\log\left(\frac{C\exp(C{\rm dist}_{0}(x,y))}{K(z,t\,|\,y,0)}\right)},$ for all $(z,t)\in M\times[\tfrac{1}{2},1]$.
If we restrict $(z,t)\in B_{0}(x,2)\times[\tfrac{1}{2},1]$, then, by Theorem 26.31 in [CCGG+10], we have $\displaystyle K(z,t\,|\,y,0)\geq C^{-1}e^{-C{\rm dist}^{2}_{0}(y,z)}\geq C^{-1}e^{-C{\rm dist}^{2}_{0}(x,y)},$ (A.4) and the gradient estimate above becomes $\displaystyle\big{|}\nabla_{z}K(z,t\,|\,y,0)\big{|}\leq CK(z,t\,|\,y,0)\left(C+C{\rm dist}^{2}_{0}(x,y)\right)$ (A.5) for all $(z,t)\in B_{0}(x,2)\times[\tfrac{1}{2},1]$. This further implies that, if $z\in U\subset B_{0}(x,1)$, then we have $\displaystyle\,\left|\nabla_{z}\big{(}K(z,1\,|\,y,0)\log K(z,1\,|\,y,0)\big{)}\right|$ (A.6) $\displaystyle\leq$ $\displaystyle\ |\nabla_{z}K(z,1\,|\,y,0)|\big{(}1+|\log K(z,1\,|\,y,0)|\big{)}$ $\displaystyle\leq$ $\displaystyle\ C(C+C{\rm dist}^{2}_{0}(x,y))\cdot\big{(}K(z,1\,|\,y,0)+K^{\frac{1}{2}}(z,1\,|\,y,0)+K^{2}(z,1\,|\,y,0)\big{)},$ where we have applied the fact that $|u\log u|\leq u^{\frac{1}{2}}+u^{2}$ for all $u>0$. Finally, by Corollary 2.26 in [CCGG+10], for all $(z,t)\in B_{0}(x,2)\times[\tfrac{1}{2},1]$, it holds that $\displaystyle K(z,t\,|\,y,0)\leq Ce^{-C^{-1}{\rm dist}^{2}_{0}(y,z)}\leq Ce^{-C^{-1}{\rm dist}^{2}_{0}(x,y)},$ (A.7) where $C$ depends on the curvature bound and $\displaystyle\inf_{z\in B_{0}(x,2)}{\rm Vol}_{g_{0}}B_{0}(z,1)$, which in turn depends only on the curvature bound and ${\rm Vol}_{g_{0}}B_{0}(x,1)$ by the Bishop-Gromov theorem. The lemma then follows from combining (A.6) and (A.7). ∎ ###### Lemma A.8. There are constants $C$, depending on the curvature bound on $M\times[0,1]$ and ${\rm Vol}_{g_{0}}B_{0}(x,1)$, such that $\sup_{z\in U}\left|\nabla^{2}_{z}\big{(}K(z,1\,|\,y,0)\log K(z,1\,|\,y,0)\big{)}\right|\leq C\exp\left(-C^{-1}{\rm dist}^{2}_{0}(x,y)\right),$ for all $y\in M$. ###### Proof. Let us fix an arbitrary $y\in M$.
Combining (A.5) and (A.7), we have $\displaystyle\big{|}\nabla_{z}K(z,t\,|\,y,0)\big{|}\leq C\exp\left(-C^{-1}{\rm dist}^{2}_{0}(x,y)\right)\quad\text{ for all }\quad(z,t)\in B_{0}(x,2)\times[\tfrac{1}{2},1].$ (A.9) We shall then consider the function $u(z,t):=K(z,t\,|\,y,0)$, which is a solution to the heat equation. Indeed, since $\left(\frac{\partial}{\partial t}-\Delta_{L}\right)\nabla^{2}u=0,$ where $\Delta_{L}$ is the Lichnerowicz Laplacian operator, we have $\displaystyle\Box|\nabla^{2}u|^{2}$ $\displaystyle=$ $\displaystyle-2|\nabla^{3}u|^{2}+4\operatorname{Rm}(\nabla^{2}u,\nabla^{2}u)$ $\displaystyle\leq$ $\displaystyle-2|\nabla^{3}u|^{2}+C|\nabla^{2}u|^{2},$ and $C$ depends on the curvature bound. On the other hand, we have $\Box|\nabla u|^{2}=-2|\nabla^{2}u|^{2}.$ We may then apply Shi’s gradient estimate (cf. Theorem 14.10 in [CCGG+10], using the cut-off function constructed by Lemma 14.3 therein) on $B_{0}(x,2)\times[\tfrac{1}{2},1]$ and obtain that $\displaystyle\big{|}\nabla^{2}_{z}K(z,1\,|\,y,0)\big{|}\leq C\exp\left(-C^{-1}{\rm dist}^{2}_{0}(x,y)\right)\quad\text{ for all }\quad z\in B_{0}(x,1).$ (A.10) Since, for all $z\in B_{0}(x,1)$, we have $\displaystyle\,\left|\nabla^{2}_{z}\big{(}K(z,1\,|\,y,0)\log K(z,1\,|\,y,0)\big{)}\right|$ (A.11) $\displaystyle\leq$ $\displaystyle\ |\nabla^{2}K(z,1\,|\,y,0)|(|\log K(z,1\,|\,y,0)|+1)+\frac{\left|\nabla_{z}K(z,1\,|\,y,0)\right|^{2}}{K(z,1\,|\,y,0)}$ $\displaystyle\leq$ $\displaystyle\ |\nabla^{2}K(z,1\,|\,y,0)|(|\log K(z,1\,|\,y,0)|+1)+C\exp\left(-C^{-1}{\rm dist}^{2}_{0}(x,y)\right),$ where in the last inequality, we have applied a consequence of (A.5) and (A.7): $\displaystyle\frac{\left|\nabla_{z}K(z,1\,|\,y,0)\right|^{2}}{K(z,1\,|\,y,0)}$ $\displaystyle=$ $\displaystyle\left(\frac{\left|\nabla_{z}K(z,1\,|\,y,0)\right|}{K(z,1\,|\,y,0)}\right)^{2}\cdot K(z,1\,|\,y,0)$ $\displaystyle\leq$ $\displaystyle C(1+{\rm dist}^{2}_{0}(x,y))^{2}\exp\left(-C^{-1}{\rm dist}^{2}_{0}(x,y)\right)$ $\displaystyle\leq$
$\displaystyle C\exp\left(-C^{-1}{\rm dist}^{2}_{0}(x,y)\right),$ for all $z\in B_{0}(x,1)$, the lemma then follows from (A.11) together with (A.4), (A.7), and (A.10). ∎ ###### Lemma A.12. There are constants $C$, depending on the curvature bound on $M\times[0,1]$ and ${\rm Vol}_{g_{0}}B_{0}(x,1)$, such that $\sup_{t\in(1-\varepsilon,1+\varepsilon)}\left|\partial_{t}\big{(}K(x,t\,|\,y,0)\log K(x,t\,|\,y,0)\big{)}\right|\leq C\exp\left(-C^{-1}{\rm dist}^{2}_{0}(x,y)\right),$ for all $y\in M$, where $\varepsilon$ is a small positive constant, which can be taken to be, say, $100^{-1}$. ###### Proof. Since $\displaystyle\partial_{t}\big{(}K(x,t\,|\,y,0)\log K(x,t\,|\,y,0)\big{)}=\big{(}\log K(x,t\,|\,y,0)+1\big{)}\cdot\Delta_{x}K(x,t\,|\,y,0),$ the proof is not essentially different from that of the above lemma. ∎ ###### Proof of Theorem A.1. Let $v\in T_{x}M$ be a unit vector with respect to $g(1)$, and let $\gamma(s)$ be a unit speed $g(1)$-geodesic emanating from $x$ with $\gamma^{\prime}(0)=v$. Letting $u_{z}(y):=K(z,1\,|\,y,0)\log K(z,1\,|\,y,0)$, we have $\displaystyle\left|\frac{1}{s}\big{(}u_{\gamma(s)}(y)-u_{x}(y)\big{)}\right|\leq\frac{1}{s}\int_{0}^{s}\sup_{z\in B_{0}(x,1)}|\nabla_{z}u_{z}(y)|ds\leq C\exp\left(-C^{-1}{\rm dist}^{2}_{0}(x,y)\right),$ for all $s$ small enough, where we have applied Lemma A.2, and the right-hand side is obviously integrable in $y$ because of the curvature boundedness assumption.
Hence, by Lebesgue’s dominated convergence theorem, we have $\displaystyle\nabla_{v}\mathcal{N}^{*}_{0}(x,1)$ $\displaystyle=$ $\displaystyle\lim_{s\rightarrow 0+}\frac{1}{s}\big{(}\mathcal{N}^{*}_{0}(\gamma(s),1)-\mathcal{N}^{*}_{0}(x,1)\big{)}$ $\displaystyle=$ $\displaystyle\lim_{s\rightarrow 0+}\int_{M}\frac{1}{s}\big{(}u_{\gamma(s)}(y)-u_{x}(y)\big{)}dg_{0}(y)$ $\displaystyle=$ $\displaystyle\int_{M}\lim_{s\rightarrow 0+}\frac{1}{s}\big{(}u_{\gamma(s)}(y)-u_{x}(y)\big{)}dg_{0}(y)$ $\displaystyle=$ $\displaystyle\int_{M}\nabla_{v}\big{(}K(x,1\,|\,y,0)\log K(x,1\,|\,y,0)\big{)}dg_{0}(y).$ We have proved that the first spatial derivative is interchangeable with the integration. In like manner, with the help of Lemma A.8 and Lemma A.12, one may also verify that both the second spatial derivative and the first time derivative are interchangeable with the integration. Once this is done, the theorem follows from Bamler’s original proof of Theorem 5.9 in [Bam20a]. ∎ ## References * [Bam20a] Richard H. Bamler, _Entropy and heat kernel bounds on a Ricci flow background_ , https://arxiv.org/abs/2008.07093 (2020). * [Bam20b] , _Compactness theory of the space of super Ricci flows_ , https://arxiv.org/abs/2008.09298 (2020). * [Bam20c] , _Structure theory of non-collapsed limits of Ricci flows_ , https://arxiv.org/abs/2009.03243 (2020). * [BCP10] Mihai Bailesteanu, Xiaodong Cao, and Artem Pulemotov, _Gradient estimates for the heat equation under the Ricci flow_ , J. Funct. Anal. 258 (2010), no. 10, 3517–3542. * [Br09] S. Brendle, _A generalization of Hamilton’s differential Harnack inequality for the Ricci flow_ , J. Differential Geom. 82 (2009), no. 1, 207–227. * [CZ11] Xiaodong Cao and Qi S. Zhang, _The conjugate heat equation and ancient solutions of the Ricci flow_ , Adv. Math. 228 (2011), no. 5, 2891–2919. * [CHI04] Huai-Dong Cao, Richard S. Hamilton, and Tom Ilmanen.
_Gaussian densities and stability for some Ricci solitons_ , arXiv preprint math/0404165, 2004. * [CaN09] José Carrillo and Lei Ni, _Sharp logarithmic Sobolev inequalities on gradient solitons and applications_ , Comm. Anal. Geom. 17 (2009), 721–753. * [Cha19] Pak-Yeung Chan, _Curvature estimates for steady gradient Ricci solitons_ , Trans. Amer. Math. Soc. 372 (2019), no. 12, 8985–9008. * [Cha20] , in preparation. * [CMZ21] Pak-Yeung Chan, Zilu Ma, and Yongjia Zhang, _Ancient Ricci flows with asymptotic solitons._ arXiv: 2106.06904. * [CTY] Albert Chau, Luen-Fai Tam, and Chengjie Yu, _Pseudolocality for the Ricci flow and applications_ , arXiv preprint math/0701153, 2007. * [CBl] Bing-Long Chen, _Strong uniqueness of the Ricci flow_ , J. Differential Geom. 82 (2009), no. 2, 363–382. * [CDM20] Bennett Chow, Yuxing Deng, and Zilu Ma, _On four-dimensional steady gradient Ricci solitons that dimension reduce_ , https://arxiv.org/abs/2009.11456 (2020). * [CCGG+07] Chow, B.; Chu, S.; Glickenstein, D.; Guenther, C.; Isenberg, J.; Ivey, T.; Knopf, D.; Lu, P.; Luo, F.; Ni, L., _The Ricci flow: techniques and applications. Part I. Geometric Aspects_ , Mathematical Surveys and Monographs, vol. 135, AMS, Providence, RI, 2007. * [CCGG+08] , _The Ricci flow: techniques and applications. Part II. Analytic Aspects_ , Mathematical Surveys and Monographs, vol. 144, AMS, Providence, RI, 2008. * [CCGG+10] , _The Ricci flow: techniques and applications. Part III. Geometric-Analytic Aspects_ , Mathematical Surveys and Monographs, vol. 163, AMS, Providence, RI, 2010. * [CLN06] Bennett Chow, Peng Lu, and Lei Ni, _Hamilton’s Ricci flow_ , Lectures in Contemporary Mathematics, 3, Science Press and Graduate Studies in Mathematics, 77, American Mathematical Society (co-publication), 2006. * [DZ18] Yuxing Deng and Xiaohua Zhu, _Asymptotic behavior of positively curved steady Ricci solitons_ , Trans. Amer. Math. Soc. 370 (2018), 2855–2877. * [Ha1] Richard S.
Hamilton, _Harnack estimate for the mean curvature flow_ , J. Differential Geom. 41 (1995), no. 1, 215–226. * [Ha2] , _A Compactness Property for Solutions of the Ricci Flow_ , American Journal of Mathematics, 1995, 117(3): 545–572. * [HN14] Hans-Joachim Hein and Aaron Naber, _New Logarithmic Sobolev Inequalities and an $\varepsilon$-Regularity Theorem for the Ricci Flow,_ Communications on Pure and Applied Mathematics 67, no. 9 (2014), 1543–1561. * [KP82] Leon Karp and Peter Li, _The heat equation on complete Riemannian manifolds._ Unpublished notes, 1982. * [KL08] Bruce Kleiner and John Lott, _Notes on Perelman’s papers_ , Geom. Topol. 12 (2008), no. 5, 2587–2855. * [Ko07] Brett Kotschwar, _Hamilton’s gradient estimate for the heat kernel on complete manifolds_ , Proceedings of the American Mathematical Society 135, no. 9 (2007), 3013–3019. * [N10] Aaron Naber, _Noncompact shrinking four solitons with nonnegative curvature_ , Journal für die reine und angewandte Mathematik (Crelles Journal), 2010, 645: 125–153. * [Ni05] Lei Ni, _Ancient solutions to Kähler-Ricci flow_ , Math. Res. Lett. 12 (2005), 633–654. * [Per02] Grisha Perelman, _The entropy formula for the Ricci flow and its geometric applications_ , arXiv:math.DG/0211159. * [Per03] , _Ricci flow with surgery on three-manifolds_ , arXiv:math/0303109. * [Ru] Walter Rudin, _Real and complex analysis._ Tata McGraw-Hill Education, 2006. * [Sh] Wan-Xiong Shi, _Deforming the metric on complete Riemannian manifolds._ J. Differential Geom. 30 (1989), no. 1, 223–301. * [Xu17] Guoyi Xu, _An equation linking W-entropy with reduced volume_ , Journal für die reine und angewandte Mathematik, Volume 2017, Issue 727. * [Y] Rugang Ye, _On the $l$-Function and the Reduced Volume of Perelman I_, Trans. Amer. Math. Soc. 360 (2008), 507–531. * [ZQ06] Qi S. Zhang, _Some gradient estimates for the heat equation on domains and for an equation by Perelman._ International Mathematics Research Notices 2006 (2006), 1–39.
* [ZQ10] , _Sobolev inequalities, heat kernels under Ricci flow, and the Poincaré conjecture._ CRC Press, 2010. * [ZY1] Yongjia Zhang, _Entropy, noncollapsing, and a gap theorem for ancient solutions to the Ricci flow._ Communications in Analysis and Geometry, _to appear_. * [ZY2] , _On the equivalence between noncollapsing and bounded entropy for ancient solutions to the Ricci flow._ Journal für die reine und angewandte Mathematik, Volume 2020, Issue 762, Pages 35. Department of Mathematics, University of California, San Diego, CA, 92093 E-mail address: `[email protected]` School of Mathematics, University of Minnesota, Twin Cities, MN, 55414 E-mail address: `[email protected]`
2101.01235
# An integrated abundance model for estimating county-level prevalence of opioid misuse in Ohio Staci A. Hepler† Department of Mathematics and Statistics, Wake Forest University, Winston-Salem, USA [email protected] David M. Kline† Department of Biostatistics and Data Science, Wake Forest School of Medicine, Winston-Salem, USA [email protected] Andrea Bonny Division of Adolescent Medicine, Nationwide Children’s Hospital, Department of Pediatrics, The Ohio State University, Columbus, USA Erin McKnight Division of Adolescent Medicine, Nationwide Children’s Hospital, Department of Pediatrics, The Ohio State University, Columbus, USA Lance A. Waller Department of Biostatistics and Bioinformatics, Emory University, Atlanta, USA (†These authors contributed equally.) ###### Abstract Opioid misuse is a national epidemic and a significant drug-related threat to the United States. While the scale of the problem is undeniable, estimates of the local prevalence of opioid misuse are lacking, despite their importance to policy-making and resource allocation. This is due, in part, to the challenge of directly measuring opioid misuse at a local level. In this paper, we develop a Bayesian hierarchical spatio-temporal abundance model that integrates indirect county-level data on opioid-related outcomes with state-level survey estimates of prevalence of opioid misuse to estimate the latent county-level prevalence and counts of people who misuse opioids. A simulation study shows that our integrated model accurately recovers the latent counts and prevalence. We apply our model to county-level surveillance data on opioid overdose deaths and treatment admissions from the state of Ohio. Our proposed framework can be applied to other small area estimation problems for hard-to-reach populations, which are common with many health conditions such as those related to illicit behaviors.
###### keywords: Disease mapping; Downscaling; Hierarchical; Opioid epidemic; Small area estimation; Surveillance ## 1 Introduction The opioid epidemic in the United States is a public health crisis (Office of National Drug Control Policy, Executive Office of the President of the United States, 2011; Drug Enforcement Agency, 2016) associated with unprecedented morbidity and mortality (Rudd et al., 2016; Zibbell et al., 2015). In the 12-month period ending in April 2021, drug overdose deaths in the United States exceeded 100,000, representing a 30% increase from the prior year (CDC, 2021). The opioid epidemic has been particularly severe in Ohio. In 2019, Ohio had the fourth highest overdose death rate of 38.3 per 100,000, which was nearly double the national rate of 21.6 per 100,000 (Division of Unintentional Injury Prevention, 2021). In addition to the epidemic of overdose death, opioid misuse puts Ohio at risk of epidemic levels of Hepatitis C and HIV (Lerner and Fauci, 2019). Knowledge of the local prevalence of people who misuse opioids (PWMO) is imperative to quantifying the magnitude of the opioid epidemic. However, this quantity is an unobservable, dynamic subset of the population, and the lack of local estimates of PWMO is a significant barrier to the public health response to the opioid crisis (Schuler et al., 2020). Despite its importance for guiding a public health response, surveillance of behaviorally-linked conditions, like substance use, is challenging. These conditions tend to vary at the local level and rely on self-reported information, which may not be reliable when dealing with an illicit behavior. This limits the utility of large surveys because they are not typically designed to provide local estimates and are reliant on self-reporting (Palamar et al., 2016).
While respondent driven sampling designs (Handcock et al., 2014; Crawford et al., 2018) have been implemented to generate estimates specific to local areas for hard to reach populations, they are typically cross-sectional and would be logistically difficult to implement across larger areas of interest. Thus, there is a great need for local, longitudinal, population-level estimates that cover an entire area of administrative interest, like a state. In Ohio, the local spatial unit of interest is the county, as that most closely corresponds to the structure of local health districts which form the basis for the allocation of state resources. We lack direct information on the prevalence of PWMO in all of Ohio’s counties. However, we do have indirect information on this quantity in the form of surveillance data collected at the county level. Specifically, we have access to counts of overdose deaths and treatment admissions annually for each county. However, neither surveillance outcome is a perfect reflection of PWMO because there are unique selection processes associated with capturing an individual in either observed outcome that are likely heterogeneous across space and time. For example, overdose death rates are likely related to supply of fentanyl in the local drug market (Pardo et al., 2019). Likewise, treatment admissions are related to local access to care. This heterogeneity prevents us from simply assuming that there is a perfect correlation between the surveillance outcomes and underlying unobservable prevalence of PWMO. Instead, we must explicitly account for these selection processes. This is structurally similar to the problem faced in ecological applications of estimating population abundance with unmarked observations and unknown detection (i.e., selection) probabilities. 
Briefly, abundance models use hierarchical models to estimate the total number of individuals in a community when only a proportion can be observed at a given place and time (Royle, 2004; Kery and Royle, 2010). Conceptually, this is also similar to capture-recapture approaches, which have been applied to drug use and HIV in the past (Jones et al., 2016; Bao et al., 2015; Barocas et al., 2018; Min et al., 2020), but does not require identification and tagging of individuals. For our problem, the county-level surveillance outcomes each reflect a proportion of the true population of PWMO in the county whom we are able to observe or detect. Since we lack individual identifying information, we cannot use a capture-recapture approach and instead will consider an extension of abundance modeling to estimate the count and prevalence of PWMO. One documented challenge with abundance models is the ability to identify intercept parameters for both the latent abundance process and the detection process, which impedes accurate estimates of absolute abundance. Royle (2004) developed an N-mixture model to estimate population size from count data, but identifiability under this approach requires replicated observations of a closed, or unchanging, population. Sólymos et al. (2012) suggested replication is unnecessary provided the number of spatial locations is large and there is at least one continuous covariate that is uniquely related to each distinct process. However, Knape et al. (2015) found that estimates of absolute abundance from an N-mixture model without replication were sensitive to model assumptions on the detection probabilities. For our application, we lack replication. That is, we do not have repeated observations of a stable population to allow us to uniquely identify both the intercept in the model for the unobserved population in the community and the intercept in the model for the partial proportion of the population observed. 
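The non-identifiability described above can be made concrete with a small simulation. The sketch below (all parameter values are illustrative, not taken from this paper) uses the standard N-mixture structure with a single outcome: since a Poisson abundance thinned by binomial detection is marginally Poisson, the data alone only pin down the product of abundance and detection.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def simulate_counts(lam, p, n_sites=200_000):
    """Single-outcome abundance model: latent N ~ Poisson(lam) at each site,
    observed count y | N ~ Binomial(N, p). Marginally, y ~ Poisson(lam * p)."""
    latent_n = rng.poisson(lam, size=n_sites)
    return rng.binomial(latent_n, p)

# Two very different (abundance, detection) pairs with the same product:
y1 = simulate_counts(lam=1000.0, p=0.05)  # lam * p = 50
y2 = simulate_counts(lam=500.0, p=0.10)   # lam * p = 50

# Without replication or auxiliary data, the observed counts cannot
# distinguish the two scenarios: their marginal distributions coincide.
print(y1.mean(), y2.mean())  # both close to 50
```

This is exactly why a single surveillance outcome, by itself, cannot separate the prevalence of PWMO from the probability of observing them.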
Facing a similar problem, Stoner et al. (2019) used informative prior distributions to identify a hierarchical model for under-reporting using a single observed surveillance outcome. While this is a reasonable solution to the problem of non-identifiability when such prior information exists, results will be dependent on the choice of prior (Neath and Samaniego, 1997). Instead, we integrate multiple sources of data at the county and state levels to identify our model. This is the notion behind integrated population models, which are increasingly popular ecological models used to analyze population size. By jointly modeling multiple data sources that share some underlying latent process, these models can allow estimation of parameters which would be non-identifiable in a single outcome model (Besbeas et al., 2002; Schaub and Abadi, 2011). In our setting, the models for the county-level surveillance outcomes of overdose death and treatment counts both relate to the latent number of PWMO. Incorporating multiple county-level surveillance outcomes creates “pseudo-replication” because they show aggregate subsets of individuals who were detected, providing multiple partial views of the population of interest. We also incorporate state-level survey estimates of PWMO to inform the overall statewide prevalence, which, in turn, provides information about the range of the detection probabilities required for identification of the model (Knape et al., 2015). Thus, we use indirect count data on county-level surveillance outcomes and an integrated abundance model framework to inform county-level estimates of PWMO. This can also be viewed as an approach for using the county-level surveillance data to inform small area model-based estimates that downscale the state-level survey data to obtain county-level estimates on PWMO. 
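To sketch why the integration helps, consider the following minimal generative version of the setup (the county structure, prevalence distribution, and detection probabilities are all hypothetical, not the paper's fitted model). Two surveillance outcomes provide partial views of the same latent counts, and a statewide prevalence anchor then identifies the detection probabilities by simple moment matching:

```python
import numpy as np

rng = np.random.default_rng(seed=7)

n_counties = 88
pop = rng.integers(10_000, 1_000_000, size=n_counties)  # county populations
theta = rng.beta(2, 60, size=n_counties)                # latent misuse prevalence
N = rng.binomial(pop, theta)                            # latent counts of PWMO

p_death, p_treat = 0.004, 0.06                          # detection probabilities
deaths = rng.binomial(N, p_death)                       # overdose deaths
treats = rng.binomial(N, p_treat)                       # treatment admissions

# A state-level survey estimate anchors the overall prevalence (here taken
# to be unbiased for the true statewide prevalence) ...
survey_prev = N.sum() / pop.sum()
# ... which pins down the statewide latent count, and hence the detection
# probabilities, by moment matching:
N_hat = survey_prev * pop.sum()
print(deaths.sum() / N_hat)  # close to p_death
print(treats.sum() / N_hat)  # close to p_treat
```

Without the survey anchor, `deaths` and `treats` alone would identify only the products of prevalence and detection, exactly as in the single-outcome case.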
In this paper, we develop an extension to the abundance modeling framework that integrates multiple aggregate data sources at different spatial supports to estimate county-level prevalence of PWMO. We define PWMO as those who have misused opioids in the past year and are at risk of experiencing overdose death or treatment admission for opioid misuse. In addition to prevalence and relative risks, we estimate county-level counts of PWMO, which may ultimately be more useful for per capita resource allocation. We will illustrate through simulation studies that our model accurately estimates the quantities of interest and is a drastic improvement over baseline approaches that ignore spatial heterogeneity. We will also illustrate the benefits of including multiple surveillance outcomes. We then apply our methodology to data from Ohio to estimate annual county-level prevalence of PWMO over a 13-year period. By doing so, we provide a coherent framework for integrating multiple sources of data to estimate an unobservable quantity of critical importance for public health policy and resource allocation. The rest of the paper is organized as follows. We describe the available data in Section 2. The modeling framework is detailed in Section 3. We present the design and results of our simulation study in Section 4. In Section 5, we describe the results for the application to the data from Ohio. We discuss the implications of our findings in Section 6.

## 2 Data

Since we lack direct county-level data on PWMO, our primary data sources will be annual county-level counts of overdose deaths and treatment admissions for each of Ohio’s 88 counties from 2007-2019, the most recent available year. Overdose death data are publicly available from the Ohio Public Health Data Warehouse (Ohio Public Health Data Warehouse, 2020). Deaths are indexed to the county of residence of the decedent and are counted if the death certificate mentions poisoning from any opioid.
Annual county-level treatment admission counts were obtained through a data use agreement with the Ohio Department of Mental Health and Addiction Services. Treatment admissions are indexed to the patient’s county of residence and capture any residential, intensive outpatient, or outpatient treatment for opioid misuse. Data were provided broken down into two age groups (adolescents and adults) but will only be considered in total for this study. State policy requires counts between 1 and 9 to be suppressed, causing some counties to have censored counts. Population data used for calculating rates are estimates from the National Center for Health Statistics and were also obtained from the Ohio Public Health Data Warehouse (Ohio Public Health Data Warehouse, 2020). Our model also incorporates state-level survey estimates of the prevalence of PWMO from the National Survey on Drug Use and Health (NSDUH) (SAMHSA, Center for Behavioral Health Statistics and Quality, 2003-2005, 2006-2008, 2009-2010, 2011-2014, 2015-2016, 2016-2017, 2017-2018, 2018-2019). We obtain state-level estimates for past year nonmedical opioid use for surveys prior to 2015 and past year opioid misuse after 2015. The language of the survey question was updated in 2015, but we assume the same underlying construct is addressed over time. The survey data for Ohio are estimates of multi-year averages of the statewide prevalence of misuse. The supplementary material contains the multi-year estimates of the statewide prevalence along with the standard errors. Note that the county-level data are from the years 2007 ($t=1$) to 2019 ($T=13$), but we incorporate survey information from 2003 ($t=-3$) to 2019 into the model. The survey estimates are shown in Supplemental Table 1. We utilize county-level estimates of sociodemographic characteristics from the American Community Survey (ACS) using the R package tidycensus.
Variables obtained included poverty rate, unemployment rate, the proportion with at least a high school degree, and the proportion on food stamps. ACS estimates of the 5-year averages are available for all $n=88$ counties in Ohio from 2009-2019. ACS estimates for the individual years are available for $38$ of the $88$ counties. We also acquired publicly available data on opioid prescribing rates from the Ohio Automated RX Reporting System (OARRS), which was available from 2010-2019. In addition, we compiled county-level indicators of health professional shortage areas (HPSA) and medically underserved areas (MU) from the Health Resources and Services Administration, high intensity drug trafficking areas (HIDTA) from the Drug Enforcement Agency, and metropolitan statistical areas (MSA) from the United States Census Bureau. We also created an indicator of whether an interstate highway passed through each county to reflect transportation networks.

## 3 Model

Let $Y_{it}^{(k)}$ be the count of PWMO who experience outcome $k=1,...,K$ in county $i=1,...,n$ during year $t=1,...,T$. Note that in our application, $K=2$ and $k=1,2$ refer to treatment admissions and overdose deaths, respectively. We also observe state-level survey information, denoted $S_{a:b}$, regarding the estimated statewide average prevalence of opioid misuse for the multi-year time period from year $a$ to year $b$ (inclusive). We are ultimately interested in estimating the latent number of PWMO in county $i$ during year $t$, $N_{it}$, and the relative risk of misuse, $\lambda_{it}$. As in Berliner (1996), we use a three-stage Bayesian hierarchical model to relate the observed data to the latent processes of interest. The data, process, and prior layers of the model are defined in the following subsections.

### 3.1 Data Model

#### 3.1.1 County-level Surveillance Data

We start by specifying a model for the observed county-level surveillance outcomes.
Assume $Y_{it}^{(k)}|N_{it},p_{it}^{(k)}\stackrel{{\scriptstyle ind}}{{\sim}}\text{Binomial}\left(N_{it},p_{it}^{(k)}\right),$ (1) where $\text{logit}\left(p_{it}^{(k)}\right)=\mu_{t}^{(k)}+{\bf X}_{it}^{(k)}\mbox{\boldmath$\beta$}^{(k)}+f^{(k)}_{it}+\epsilon_{it}^{(k)}.$ In the logistic regression model for each outcome $k$, $\mu_{t}^{(k)}$ is a time-varying intercept, ${\bf X}_{it}^{(k)}$ is a vector of centered covariates, $\mbox{\boldmath$\beta$}^{(k)}$ is a vector of regression coefficients, $f^{(k)}_{it}$ is a spatio-temporal random effect, and $\epsilon_{it}^{(k)}\overset{iid}{\sim}N(0,\sigma^{2}_{k})$ accounts for additional unexplained heterogeneity. We assume that each surveillance outcome is conditionally independent given the underlying true number of PWMO. That is, we assume marginal dependence between the counts within a county is attributable to the number of PWMO in that county. For our application, the design matrix for treatment rate ${\bf X}^{(1)}$ contains indicator variables that identify the counties that are classified as HPSA and MU. The design matrix for death rate ${\bf X}^{(2)}$ contains indicator variables for whether or not the county contains an interstate, whether the county is a HIDTA, and whether the county belongs to an MSA. As mentioned in Section 2, the age-group specific treatment counts are suppressed if they are between 1 and 9. However, we can incorporate the knowledge that suppressed counts are within that interval into our model by adapting the approach of Famoye and Wang (2004) for interval censoring. There are three cases of censoring that we encounter here, denoted by the indicator: $\displaystyle c_{it}=\begin{cases}0&\text{total count observed}\\ 1&\text{adolescent count censored}\\ 2&\text{adolescent and adult counts censored}.\end{cases}$ (2) For $c_{it}=0$, we observe both the adult and adolescent counts and so the total treatment count is their sum.
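To make the data model in Equation (1) concrete, the sketch below simulates surveillance counts for a few hypothetical counties. This is a Python illustration, not the paper's R/NIMBLE implementation, and all parameter values are purely illustrative.

```python
import numpy as np

def simulate_surveillance(N, mu_k, X, beta_k, f, eps, rng):
    """Draw Y_it^(k) ~ Binomial(N_it, p_it^(k)) with a logit-linear
    detection probability, as in Equation (1).

    N    : latent counts of PWMO per county
    mu_k : outcome-specific intercept
    X    : centered covariate matrix; beta_k its coefficients
    f    : spatio-temporal random effects; eps : iid noise
    """
    eta = mu_k + X @ beta_k + f + eps      # logit(p_it^(k))
    p = 1.0 / (1.0 + np.exp(-eta))         # inverse-logit
    return rng.binomial(N, p), p

rng = np.random.default_rng(0)
N = np.array([5000, 12000, 800])           # hypothetical latent counts
X = np.array([[1.0], [0.0], [1.0]])        # e.g., a single 0/1 indicator
Y, p = simulate_surveillance(N, mu_k=-4.0, X=X, beta_k=np.array([-0.1]),
                             f=np.zeros(3), eps=np.zeros(3), rng=rng)
```

With the random effects and noise set to zero, the detection probability for a county with a zero covariate is simply the inverse-logit of the intercept.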
For $c_{it}=1$, we observe the adult count, but the adolescent count is censored so we know that the total count must be between the adult count plus 1 and the adult count plus 9. Finally, for $c_{it}=2$, both counts are censored so we know the total count must be between 2 and 18. Let $Y^{(1)}_{it0}$ be the adolescent treatment count and $Y^{(1)}_{it1}$ be the adult treatment count. Then, we have $\displaystyle Y^{(1)}_{it}=\begin{cases}Y^{(1)}_{it0}+Y^{(1)}_{it1}&c_{it}=0\\ Y^{(1)}_{it1}&c_{it}=1\\ 0&c_{it}=2,\end{cases}$ (3) and the likelihood from Equation 1 becomes $\displaystyle L(\mathbf{p}^{(1)},\mathbf{p}^{(2)},\textbf{N}|\textbf{Y}^{(1)},\textbf{Y}^{(2)})=\left[\prod_{t=1}^{13}\prod_{i=1}^{88}f\left(Y_{it}^{(2)}|p_{it}^{(2)},N_{it}\right)\right]\times\left[\prod_{t=1}^{13}\prod_{i=1}^{88}\left[f\left(Y_{it}^{(1)}|p_{it}^{(1)},N_{it}\right)\right]^{I(c_{it}=0)}\left[F\left(Y_{it}^{(1)}+9|p_{it}^{(1)},N_{it}\right)-F\left(Y_{it}^{(1)}|p_{it}^{(1)},N_{it}\right)\right]^{I(c_{it}=1)}\left[F\left(18|p^{(1)}_{it},N_{it}\right)-F\left(1|p^{(1)}_{it},N_{it}\right)\right]^{I(c_{it}=2)}\right],$ (4) where $F\left(\cdot|p^{(1)}_{it},N_{it}\right)$ is the cumulative distribution function and $f\left(\cdot|p^{(1)}_{it},N_{it}\right)$ is the probability mass function of the binomial distribution with population size $N_{it}$ and rate $p^{(1)}_{it}$, and $f\left(\cdot|p^{(2)}_{it},N_{it}\right)$ is the probability mass function of the binomial distribution with population size $N_{it}$ and rate $p^{(2)}_{it}$.

#### 3.1.2 State-level Survey Data

For the survey information regarding overall statewide prevalence of misuse between years $a$ and $b$, given by $S_{a:b}$, we assume a normal distribution truncated to $(0,1)$. The standard error $\hat{se}_{a:b}$ is estimated from the survey and assumed to be known.
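The censoring contributions in Equation (4) reduce to differences of binomial CDFs. A sketch using SciPy, with made-up values for $N_{it}$ and $p^{(1)}_{it}$ (the paper's own computations are done in R/NIMBLE):

```python
import numpy as np
from scipy.stats import binom

def treatment_loglik_term(y_obs, c, N, p):
    """Log-likelihood contribution of one county-year treatment count.

    c = 0: total count observed -> binomial log-pmf at y_obs
    c = 1: adult count y_obs observed, adolescent count censored in [1, 9]
           -> log( F(y_obs + 9) - F(y_obs) )
    c = 2: both counts censored, total in [2, 18] -> log( F(18) - F(1) )
    """
    if c == 0:
        return binom.logpmf(y_obs, N, p)
    if c == 1:
        return np.log(binom.cdf(y_obs + 9, N, p) - binom.cdf(y_obs, N, p))
    return np.log(binom.cdf(18, N, p) - binom.cdf(1, N, p))

# The interval probability equals the sum of pmf values over the interval:
N, p = 4000, 0.005
lhs = binom.cdf(18, N, p) - binom.cdf(1, N, p)
rhs = sum(binom.pmf(k, N, p) for k in range(2, 19))
```

The final check confirms that the CDF difference for the fully censored case is exactly the probability the total count lies in $\{2,\dots,18\}$.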
We assume a truncated normal distribution to enable the direct incorporation of the mean and standard error reported from the survey. In addition, we let $\mu_{t}$ denote the true latent statewide prevalence of misuse in year $t$ and assume this is linear over time such that $\mu_{t}=\beta_{0}^{\mu}+\beta_{1}^{\mu}t$. Note that this formulation requires survey data to be observed for at least two time periods. More specifically, we assume $\displaystyle S_{a:b}|\beta_{0}^{\mu},\beta_{1}^{\mu}\sim N_{(0,1)}\left(\frac{1}{b-a+1}\sum_{t=a}^{b}\mu_{t},\hat{se}_{a:b}^{2}\right),$ (5) so that the mean of $S_{a:b}$ is the mean of the true statewide prevalence during the time period from $a$ to $b$. The linearity assumption for $\mu_{t}$ implies the mean function is $\frac{1}{b-a+1}\sum_{t=a}^{b}\mu_{t}=\beta_{0}^{\mu}+\beta_{1}^{\mu}\frac{1}{b-a+1}\sum_{t=a}^{b}t=\beta_{0}^{\mu}+\beta_{1}^{\mu}\frac{b^{2}+b-a^{2}+a}{2(b-a+1)}.$ We assume survey estimates are independent of one another, conditional on the true statewide rate of misuse.

### 3.2 Process Model

We are primarily interested in estimating the latent number of PWMO, $N_{it}$, and the relative risk of misuse, $\lambda_{it}$. We assume a non-canonical, spatial rates-like parameterization (Cressie et al., 2005; Cressie and Wikle, 2011): $\displaystyle N_{it}|\lambda_{it}\stackrel{{\scriptstyle ind}}{{\sim}}\text{Binomial}(P_{it},\mu_{t}\lambda_{it}),$ where $P_{it}$ is the known population of county $i$ during year $t$. Let $\lambda_{it}$ represent the relative risk of misuse in county $i$ during year $t$ compared to the statewide average prevalence, $\mu_{t}$, subject to the constraint $0<\mu_{t}\lambda_{it}<1$. Recall that $\mu_{t}$ is assumed to be linear across time and is informed by survey data, as described in Section 3.1.2. We use a non-standard parameterization because the survey data reflect a state average, which is not equivalent to the intercept of a standard logistic regression.
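The closed-form multi-year mean in Section 3.1.2 follows from the arithmetic-series identity $\sum_{t=a}^{b}t=(a+b)(b-a+1)/2$. A quick numerical check of the formula (a Python sketch with illustrative coefficient values):

```python
# Check that averaging the linear trend mu_t = beta0 + beta1 * t over
# t = a, ..., b matches the closed form
# beta0 + beta1 * (b^2 + b - a^2 + a) / (2 * (b - a + 1)) from the text.
def mean_mu(a, b, beta0, beta1):
    """Direct average of mu_t over t = a, ..., b."""
    return sum(beta0 + beta1 * t for t in range(a, b + 1)) / (b - a + 1)

def mean_mu_closed(a, b, beta0, beta1):
    """Closed-form version used in the survey data model."""
    return beta0 + beta1 * (b**2 + b - a**2 + a) / (2 * (b - a + 1))

# e.g., the 2003-2005 survey period maps to t = -3, ..., -1
direct = mean_mu(-3, -1, 0.05, -0.001)
closed = mean_mu_closed(-3, -1, 0.05, -0.001)
```

The identity holds for any integer period, including periods with negative indices such as the pre-2007 surveys.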
Note that we could also consider a Poisson model for the latent counts. The large population size and small rates of misuse imply these models will yield nearly identical results. We chose a binomial model to explicitly allow for the known population size, $P_{it}$. Assume $\log(\lambda_{it})=\mathbf{W}_{it}\boldsymbol{\gamma}+u_{it}+v_{it}$, where $\mathbf{W}_{it}$ is the $i$th row of a design matrix $\mathbf{W}_{t}$ for time $t$ containing centered covariates without an intercept, $u_{it}$ is a spatio-temporal random effect, and $v_{it}\overset{iid}{\sim}N(0,\sigma^{2}_{v})$. For the spatio-temporal random effect $u_{it}$, we assume an intrinsic conditional autoregressive (ICAR) model with a first-order autoregressive (AR(1)) temporal trend (Besag, 1974). More specifically, this model assumes for $t=1$ $u_{it}|\mathbf{u}_{-i,t},\tau^{2}_{u}\sim N\left(\frac{1}{w_{i+}}\sum_{j}w_{ij}u_{jt},\frac{\tau^{2}_{u}}{w_{i+}}\right),$ (6) and for $t=2,...,T$ $u_{it}|\mathbf{u}_{-i,t},u_{i,t-1},\tau^{2}_{u},\phi_{u}\sim N\left(\phi_{u}u_{i,t-1}+\frac{1}{w_{i+}}\sum_{j}w_{ij}(u_{jt}-\phi_{u}u_{j,t-1}),\frac{\tau^{2}_{u}}{w_{i+}}\right),$ (7) where $\mathbf{u}_{-i,t}=\{u_{jt}:j\neq i\}$, $w_{ij}$ is an indicator that counties $i$ and $j$ share a border, and $w_{i+}=\sum_{j}w_{ij}$ is the total number of neighbors for county $i$. Let $\mathbf{u}_{t}=\left(u_{1t},...,u_{nt}\right)^{\prime}$ denote the vector of random effects during year $t$.
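The conditional specifications (6)-(7) correspond to a joint density whose precision matrix is the graph Laplacian built from the neighbor counts $w_{i+}$ and adjacency indicators $w_{ij}$. A small Python sketch on a hypothetical four-county grid (not the Ohio adjacency) shows why a sum-to-zero constraint is needed:

```python
import numpy as np

def icar_precision(adjacency):
    """Build H - A: H is diagonal with the neighbor counts w_{i+},
    A is the symmetric 0/1 adjacency matrix with entries w_{ij}."""
    A = np.asarray(adjacency, dtype=float)
    return np.diag(A.sum(axis=1)) - A

# Hypothetical 2x2 grid of four "counties" with rook adjacency:
#   0 - 1
#   |   |
#   2 - 3
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
Q = icar_precision(A)
row_sums = Q @ np.ones(4)        # all zero: the constant vector lies in the
                                 # null space, so the prior is improper
rank = np.linalg.matrix_rank(Q)  # n - 1 for a connected adjacency graph
```

Because the rows of $\mathbf{H}-\mathbf{A}$ sum to zero, the prior is invariant to adding a constant to all $u_{it}$, which is exactly what the centering constraint removes.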
The intrinsic models specified by equations (6) and (7) yield joint distributions with probability density functions $\left[\mathbf{u}_{1}|\tau^{2}_{u}\right]\propto\exp\left(-\frac{1}{2\tau^{2}_{u}}\mathbf{u}^{\prime}_{1}(\mathbf{H}-\mathbf{A})\mathbf{u}_{1}\right)$ for $t=1$ and $\left[\mathbf{u}_{t}|\mathbf{u}_{t-1},\tau^{2}_{u},\phi_{u}\right]\propto\exp\left(-\frac{1}{2\tau^{2}_{u}}\left(\mathbf{u}_{t}-\phi_{u}\mathbf{u}_{t-1}\right)^{\prime}(\mathbf{H}-\mathbf{A})\left(\mathbf{u}_{t}-\phi_{u}\mathbf{u}_{t-1}\right)\right)$ for $t=2,...,T$, where $\mathbf{A}$ is the adjacency matrix whose $(i,j)$th element is $w_{ij}$ and $\mathbf{H}$ is a diagonal matrix with $(i,i)$th element $w_{i+}$. The above are not valid probability densities since the precision matrix $\mathbf{H}-\mathbf{A}$ is not of full rank. However, the ICAR model is a valid process-level model provided a centering constraint, $\sum_{i}u_{it}=0$ for all $t$, is enforced (Banerjee et al., 2004). In this application, we chose to include standardized county-level covariate information on poverty rate, unemployment rate, the percentage with at least a high school degree, the percentage on food stamps, and the opioid prescribing rate per capita as covariates for relative risk of misuse. The prescribing rate data from OARRS are only available from 2010 to 2019. Thus, this variable is only included in $\mathbf{W}_{t}$ for $t=4,...,T$. As discussed in Section 1, it is well known that the single-visit abundance model suffers from non-identifiability of intercept parameters. To see this more clearly, one can show that in the single outcome ($K=1$) case, integrating out the latent count $N_{it}$ yields the marginal distribution $Y_{it}^{(k)}|p_{it}^{(k)},\mu_{t},\lambda_{it}\sim\text{Binomial}(P_{it},\mu_{t}\lambda_{it}p_{it}^{(k)})$.
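The binomial-thinning identity behind this collapsed marginal (a binomial count whose size parameter is itself binomial is again binomial, with success probability equal to the product) can be verified exactly for small hypothetical values. A Python sketch, not the paper's own code:

```python
from scipy.stats import binom

# If N ~ Binomial(P, q) with q = mu * lambda, and Y | N ~ Binomial(N, p),
# then marginally Y ~ Binomial(P, q * p): only the product q * p enters,
# so q and p are not separately identified from Y alone.
P, q, p = 50, 0.3, 0.2

def marginal_pmf(y):
    """P(Y = y) obtained by summing over the latent count N."""
    return sum(binom.pmf(n, P, q) * binom.pmf(y, n, p)
               for n in range(y, P + 1))

direct = [marginal_pmf(y) for y in range(P + 1)]
collapsed = [binom.pmf(y, P, q * p) for y in range(P + 1)]
```

The two pmfs agree term by term, which is precisely why a single outcome cannot separate the prevalence and detection intercepts.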
The product form for the rate implies these quantities, and in particular the intercept parameters $\mu_{t}$ and $\mu_{t}^{(k)}$, cannot be individually identified without additional information. However, our model’s integration of multiple data sources resolves this issue. In particular, the survey data directly inform $\mu_{t}$. Also note that by jointly modeling $K>1$ outcomes, the joint marginal distribution of the outcomes after summing over the possible values of $N_{it}$ does not take the simple marginal form that arises in the single-outcome setting. By proposing an integrated model that jointly models multiple data sources, we are able to identify the primary quantities of interest, $N_{it}$ and $\lambda_{it}$. Achieving identifiability by integrating multiple data sources is discussed in Besbeas et al. (2002) and Section 9.2 of Cole (2020). The models for the surveillance outcomes depend on spatio-temporal random effects $f_{it}^{(k)}$. These are specified similarly to the spatio-temporal random effects in the process-level model for the relative risk of misuse. In particular, for each outcome $k$, we assume an ICAR model with an AR(1) temporal trend analogous to (6)-(7) with conditional variance parameter $\tau^{2}_{k}$ and temporal autocorrelation parameter $\phi_{k}$. We assume these random effects are independent across the $k$ outcomes. In more traditional models, we would expect these rates to be correlated within a county because they share a common underlying process, the latent number of PWMO in that location. However, we explicitly account for that process: the outcome models are specified conditional on the latent number of PWMO, so these random effects capture residual spatial variability in the outcome-specific rates of opioid overdose death and treatment admissions. We do not believe a priori that these conditional rates would follow similar spatial trends.
If instead we were modeling outcomes that were believed to be correlated conditional on $N_{it}$, this model can easily be generalized to a multivariate conditional autoregressive model.

### 3.3 ACS Covariate Model

Our process model relates the relative risk of misuse, $\lambda_{it}$, to centered covariates from the ACS in the design matrix ${\bf{W}}_{t}$. However, only 5-year average estimates are available for all 88 counties, which leads to temporal misalignment with the annual estimates of interest. To account for this and the additional uncertainty from using multi-year average estimates, we add an additional layer to the data and process stages of the model, similar to the work of Bradley et al. (2015), to account for uncertainty in the latent yearly county-level values of these variables. This additional layer leverages the annual estimates that are available in 38 counties to inform the latent annual value in counties where only 5-year average estimates are available. Let $\hat{\omega}_{it}^{(5)}$ and $\hat{\omega}_{it}^{(1)}$ denote the $5$-year and $1$-year estimates from the ACS for one of the variables of interest with standard errors $\hat{\sigma}_{(5)}$ and $\hat{\sigma}_{(1)}$, respectively. Let $\omega_{it}$ denote the true latent value of this variable in county $i$ during year $t$. The columns of the design matrix $\mathbf{W}_{t}$ corresponding to the ACS variables contain standardized variables and are thus functions of these latent $\omega_{it}$. For $t=3,...,T$ and each of the ACS variables included, we assume $\hat{\omega}_{it}^{(5)}\sim N_{(0,100)}\left(\frac{1}{5}\sum_{\ell=t-4}^{t}\omega_{i\ell},\hat{\sigma}_{(5)}^{2}\right)$ and $\hat{\omega}_{it}^{(1)}\sim N_{(0,100)}\left(\omega_{it},\hat{\sigma}_{(1)}^{2}\right),$ (8) where $N_{(0,100)}$ denotes the normal distribution truncated to $(0,100)$ since our variables of interest are recorded as percentages.
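The measurement-error layer in Equation (8) can be sketched with SciPy's truncated normal. The values below are illustrative stand-ins, not the Ohio ACS data, and the paper's own implementation is in R/NIMBLE:

```python
import numpy as np
from scipy.stats import truncnorm

def trunc_normal_0_100(mean, sd):
    """N_(0,100)(mean, sd^2): a normal truncated to (0, 100), the support
    of a percentage-scale variable. truncnorm takes standardized bounds."""
    a, b = (0.0 - mean) / sd, (100.0 - mean) / sd
    return truncnorm(a, b, loc=mean, scale=sd)

rng = np.random.default_rng(1)
# Hypothetical latent yearly poverty rates omega_{i,t-4}, ..., omega_{it}:
omega = np.array([14.2, 13.8, 15.1, 16.0, 15.5])
sigma5 = 0.8                          # hypothetical ACS standard error
# The 5-year ACS estimate is centered at the 5-year average of the latents:
dist5 = trunc_normal_0_100(omega.mean(), sigma5)
draw5 = dist5.rvs(random_state=rng)
```

Because the truncation bounds are many standard deviations from the mean here, the truncated mean is numerically indistinguishable from the untruncated one; the truncation only matters when an estimate sits near 0 or 100.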
Observe that even though we only include ACS estimates from 2009-2019, we infer latent values of the yearly variables from 2005-2019. In the process level of the model, we assume the latent yearly percentages follow $\omega_{it}\overset{ind}{\sim}N_{(0,100)}\left(\omega_{t},\tau_{i}^{2}\right)$, where $\omega_{t}$ denotes a statewide average for that variable in year $t$. This community-level process model for the latent variables permits some borrowing of strength across the counties, improving estimation in the counties that only have $5$-year estimates available. We note that a more complicated spatio-temporal structure could be considered here, as was done in Bradley et al. (2015), but it comes with additional computational expense.

### 3.4 Prior Model and Posterior Distribution

All intercepts and regression coefficients, $\mu_{t}^{(k)},\mbox{\boldmath$\beta$}^{(k)},\beta_{0}^{\mu},\beta_{1}^{\mu},$ and $\boldsymbol{\gamma}$, are independently assigned flat, uniform prior distributions on the real line. The statewide average yearly percentages for the ACS variables, $\omega_{t}$, are assigned a uniform prior distribution over $(0,100)$. All variance parameters $\sigma^{2}_{k},\tau^{2}_{k},\tau^{2}_{u},$ $\sigma^{2}_{v}$, and $\{\tau^{2}_{i}\}$ are assumed to have inverse gamma prior distributions with shape and scale parameters of 0.5. The temporal autoregressive parameters $\phi_{k},\phi_{u}$ are assumed to be uniform over $(0,1)$. The posterior distribution of the latent processes and parameters is simulated using an adaptive Metropolis-within-Gibbs Markov chain Monte Carlo (MCMC) algorithm implemented using the R package NIMBLE (de Valpine et al., 2017). To improve the efficiency associated with sampling highly correlated variables, for each county $i$, the latent counts $N_{i1},...,N_{iT}$ are updated jointly using an automated factor slice sampler as in Stoner et al. (2019).
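As intuition for how a latent count is updated, here is a deliberately simplified single-site random-walk Metropolis sketch for one county-year. This is not the automated factor slice sampler or the NIMBLE code used in the paper, and all numerical values are hypothetical:

```python
import numpy as np
from scipy.stats import binom

def log_post_N(N, Y, p_det, P, prev):
    """Unnormalized log posterior of one latent count N_it, combining the
    process model N ~ Binomial(P, prev) with K binomial outcome models."""
    if N < max(Y) or N > P:
        return -np.inf
    lp = binom.logpmf(N, P, prev)
    for y, p in zip(Y, p_det):
        lp += binom.logpmf(y, N, p)
    return lp

def mh_sample_N(Y, p_det, P, prev, n_iter=2000, step=50, seed=2):
    """Integer random-walk Metropolis over the latent count (a toy
    stand-in for the paper's joint slice-sampling update)."""
    rng = np.random.default_rng(seed)
    N = max(max(Y), int(P * prev))          # crude starting value
    out = np.empty(n_iter, dtype=int)
    for i in range(n_iter):
        prop = N + rng.integers(-step, step + 1)
        if np.log(rng.uniform()) < (log_post_N(prop, Y, p_det, P, prev)
                                    - log_post_N(N, Y, p_det, P, prev)):
            N = prop
        out[i] = N
    return out

# Hypothetical county: population 100,000, prevalence 5%, two outcomes
samples = mh_sample_N(Y=[900, 40], p_det=[0.18, 0.008],
                      P=100_000, prev=0.05)
```

Both outcomes and the process prior point toward roughly 5,000 PWMO here, so the chain concentrates near that value; the paper's joint update over $N_{i1},\dots,N_{iT}$ exists precisely because such single-site moves mix poorly under strong posterior correlation.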
NIMBLE enforces the zero-mean constraint in the ICAR models with a commonly used approach of updating these variates without the constraint and then centering (Paciorek, 2009). Convergence of the Markov chain was assessed by visually inspecting trace plots. The R code is included in the supplementary material.

## 4 Simulation Study

We performed a simulation study to assess the proposed model’s ability to accurately predict latent counts of misuse, $N_{it}$, and relative risk, $\lambda_{it}$. We fixed values of all hyperparameters and simulated $M=100$ latent processes and data sets from the proposed model, where each data set consisted of $T=10$ years of data for each of $n=100$ counties on a $10\times 10$ grid for $K=2$ county-level outcomes, along with yearly survey estimates of statewide prevalence. This mimics the setting for the Ohio data we consider in Section 5, where the county-level outcomes correspond to treatment admission counts ($k=1$) and death counts ($k=2$). Note that for the purpose of the simulation, we assume the survey estimates are yearly and not multi-year averages, and we assume covariates used in the design matrix $\mathbf{W}$ are known, so no process-level model is needed for these values. Specific details regarding how the data were simulated are in the Supplementary Material. For each of the $M=100$ simulated data sets, we fit the proposed model under two scenarios: (1) assuming we observed yearly survey information and (2) assuming we only observed survey information in years $2,5,8$, which we will refer to as the sparse survey information scenario. These scenarios are, respectively, slightly better and slightly worse than the actual multi-year average survey data available for the application.
We will compare the results of our proposed joint model to a baseline model that assumes the state-level survey estimate applies to all counties (Rembert et al., 2017; Burke and Buchanich, 2018) (i.e., no spatial heterogeneity, such that $\hat{N}_{it}=\hat{\mu}_{t}P_{it}$) and to a model using only a single county-level outcome (e.g., death counts) (Stoner et al., 2019). We used the R package NIMBLE to run an MCMC algorithm for each of the $M$ data sets under each scenario. To assess performance, we used the posterior mean as our estimate of latent misuse, $\hat{N}_{it}$, for each simulated data set and computed 95% equal-tail credible intervals. We used several different criteria to assess how well each approach estimated the true latent counts $N_{it}$ and relative risk $\lambda_{it}$. We computed the coverage probabilities (CP) of the credible intervals for the latent counts, the root mean squared error (RMSE) of the counts and of the relative risk, as well as the relative median absolute error (MAE) for the counts.

### 4.1 Simulation Results

Table 1: A comparison of results from the proposed joint model to the baseline model. The first two columns show the mean and median coverage probability (CP) across the 100 simulated data sets for $N_{it}$ (proposed/baseline). The final three columns present the proportion of simulated data sets for which the proposed model results in a smaller error along with the average error across the 100 data sets for each model (proposed/baseline).
Survey | Mean CP | Median CP | RMSE ($N_{it}$) | RMSE ($\lambda_{it}$) | Relative MAE ($N_{it}$)
Yearly | 95% | 96% | 100/100; 2598/5581 | 100/100; 0.29/0.64 | 100/100; 0.16/0.32
Sparse | 93% | 95% | 100/100; 2890/5647 | 100/100; 0.29/0.64 | 100/100; 0.17/0.33

Figure 1: Boxplots of the RMSE for the counts ($N_{it}$) (left) and relative risk ($\lambda_{it}$) (middle) and the relative MAE for the counts ($N_{it}$) (right) for the 100 simulated data sets under the proposed joint model assuming yearly survey data, the proposed joint model assuming sparse survey data, and the baseline estimates based on yearly survey data.

The main findings of the simulation study are summarized here, with additional results in the supplementary material. Briefly, the error rates of the proposed model were roughly half those of the baseline model for both the yearly and sparse survey scenarios, with these quantities smaller for the yearly survey information setting compared to the sparse survey setting (Table 1 and Figure 1). The average coverage probabilities of the credible intervals were very close to the target value of 95%. We also compared the results of our proposed joint model to the model that only considers a single county-level outcome ($Y_{it}^{(2)}$) in addition to the survey data for both the yearly and sparse survey data cases. The proposed joint model yields a coverage probability that is closer to 95% than would be obtained modeling just a single county-level outcome. We also see that the proposed joint model yields smaller errors for almost all of the 100 simulated data sets, with the average error reduced by at least 20% (Table 2 and Figure 2). In addition to illustrating that the proposed model has smaller error than competing approaches, we also show that, on average, the proposed model recovers the true parameters of the data generating model.
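The evaluation criteria above can be computed in a few lines. A generic Python sketch with toy arrays (the relative MAE definition here is one plausible reading of the text, not code from the paper):

```python
import numpy as np

def coverage(truth, lower, upper):
    """Proportion of true values inside their credible intervals."""
    return np.mean((lower <= truth) & (truth <= upper))

def rmse(truth, est):
    """Root mean squared error of the point estimates."""
    return np.sqrt(np.mean((est - truth) ** 2))

def relative_mae(truth, est):
    """Median absolute error, relative to the true counts."""
    return np.median(np.abs(est - truth) / truth)

# Toy example: three county-years
truth = np.array([100.0, 200.0, 400.0])
est = np.array([110.0, 190.0, 380.0])
lower = np.array([90.0, 150.0, 300.0])
upper = np.array([130.0, 185.0, 500.0])
cp = coverage(truth, lower, upper)
```

In the toy example the second interval misses its true value, so the coverage is 2/3; in the simulation study these quantities are averaged over all county-years and all 100 replicates.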
In Supplemental Figure 1, we show that we were able to estimate the intercept parameters $\mu_{t}$ ($\beta^{\mu}_{0}$ and $\beta^{\mu}_{1}$), $\mu_{t}^{(1)}$, and $\mu_{t}^{(2)}$. This suggests that we have adequate information on the range of the detection probabilities to overcome the identifiability issues that are common with abundance models. In addition, we show in Supplemental Figure 2 that we are able to estimate the relative risks, $\lambda_{it}$, quite well. Across the 100 simulated data sets, the average difference between the true and estimated $\lambda_{it}$ is -0.003. Likewise, we show scatterplots of the estimates against the true values for three randomly selected simulated data sets in Supplemental Figure 3 and see points clustered around the $y=x$ line, which indicates that the model is generally recovering the truth. Again, this indicates that we are able to recover the true values of the model parameters and that we have adequate information to inform the model and practically overcome challenges related to identifiability. In summary, our simulation study showed that our model accurately recovers the latent counts and relative risks and drastically outperforms the baseline model. We have also shown that our model with sparse survey information still outperforms the baseline model, although it is not as good as having yearly survey data. We observe that performance is improved by including multiple observed surveillance outcomes compared to using only a single outcome. Finally, we illustrated that the integration of state-level survey information overcomes limitations of traditional abundance models and adequately informs the estimation of model parameters.

Table 2: A comparison of results from the proposed joint model to the single outcome model. The first two columns show the mean and median coverage probability (CP) across the 100 simulated data sets for $N_{it}$ (proposed/single outcome).
The final three columns present the proportion of simulated data sets for which the proposed model results in a smaller error along with the average error across the 100 data sets for each model (proposed/single outcome).

Survey | Mean CP | Median CP | RMSE ($N_{it}$) | RMSE ($\lambda_{it}$) | Relative MAE ($N_{it}$)
Yearly | 95%/89% | 96%/90% | 99/100; 2598/3559 | 100/100; 0.39/0.64 | 100/100; 0.16/0.21
Sparse | 93%/89% | 95%/90% | 93/100; 2890/3665 | 94/100; 0.39/0.64 | 91/100; 0.17/0.21

Figure 2: Boxplots of the RMSE for the counts ($N_{it}$) (left) and relative risk ($\lambda_{it}$) (middle) and the relative MAE for the counts ($N_{it}$) (right) for the 100 simulated data sets under the proposed joint model and the single outcome model for cases assuming yearly and sparse survey data.

## 5 Ohio Prevalence Estimates

In this section, we apply the model defined in Section 3 to estimate the unobserved number of PWMO from 2007-2019 for each of Ohio’s $n=88$ counties. To do so, we utilize $T=13$ years of observed county-level counts of opioid overdose death and treatment admissions and multi-year state-level survey estimates of the prevalence of opioid misuse.

Figure 3: Maps of the estimated prevalence of PWMO, given by $\hat{N}_{it}/P_{it}$. Counties outlined in yellow have 95% credible intervals that are entirely above the baseline estimate, and the counties in blue have credible intervals that are entirely below.

Figure 4: Maps of the standard error of the prevalence of PWMO.

Figure 3 maps the posterior mean of the estimated prevalence, $\hat{N}_{it}/P_{it}$, and Figure 4 contains the posterior standard deviation. Maps for the estimated log relative risk, $\log\left(\hat{\lambda}_{it}\right)$, are in the supplementary material. We observe county-level and yearly heterogeneity, with prevalence ranging from approximately 0 to nearly 0.13. We observe the highest prevalence in southern Ohio, which is commonly considered the epicenter of the opioid epidemic in Ohio.
This map also identifies the counties whose estimated counts of PWMO are significantly different from what would be obtained under the naive baseline estimate that assumes a homogeneous statewide rate, estimated from the survey data. More specifically, the counties outlined in yellow have 95% credible intervals (CI) that are entirely above the baseline estimate, and the counties in blue have CIs that are entirely below. The statewide average prevalence of misuse parameters were estimated to be $\hat{\beta}_{0}^{\mu}=0.0535$ with 95% CI (0.0516 to 0.0557) and $\hat{\beta}_{1}^{\mu}=-0.0006$ with 95% CI (-0.0009 to -0.0004). Table 3 contains the corresponding information for the prevalence of misuse regression coefficients. We observe a 5% increase in prevalence per standard deviation increase in unemployment and high school education, a roughly 10% increase per standard deviation increase in poverty and food stamps, and a 24% increase per standard deviation increase in prescribing rate.

Table 3: Posterior mean and 95% credible intervals of prevalence ratios per 1 standard deviation change in each covariate for prevalence of PWMO.

Variable | Estimate | 95% CI
Poverty | 1.10 | (1.08, 1.13)
Unemployment | 1.05 | (1.01, 1.08)
High School | 1.05 | (1.02, 1.08)
Food Stamps | 1.11 | (1.08, 1.15)
Prescribing Rate | 1.24 | (1.19, 1.29)

Figure 5 is a map of the estimated death rate among PWMO, $\hat{p}^{(D)}_{it}$. A map of the standard errors is in the supplementary material. In the earlier years, we see very low death rates with a large degree of spatial heterogeneity. Starting around 2012, we see increases in the death rate for the southwestern Ohio region corresponding to the Cincinnati, Ohio area, followed by increasing rates in the northeastern Cleveland, Ohio region. This is likely due to the influx of fentanyl in these regions during this time period (Daniulaityte et al., 2017; Pardo et al., 2019).
Table 4 shows the posterior means and 95% credible intervals for the odds ratios for the death rate. We estimate that the odds of death are 19% higher in metropolitan statistical areas compared to non-metropolitan areas.

Figure 5: Maps of the estimated death rate among PWMO, $\hat{p}^{(D)}_{it}$.

Table 4: Posterior means and 95% credible intervals for the odds ratios corresponding to the covariates in the models for death rate and treatment rate among PWMO.

Variable | Estimate | 95% CI
Death rate: Interstates | 1.04 | (0.96, 1.14)
Death rate: HIDTA | 1.07 | (0.97, 1.17)
Death rate: MSA | 1.19 | (1.08, 1.30)
Treatment rate: HPSA | 0.91 | (0.85, 0.98)
Treatment rate: MU | 1.01 | (0.93, 1.09)

Similarly, Figure 6 maps the estimated treatment rates among PWMO, $\hat{p}^{(T)}_{it}$. A map of the standard errors is in the supplementary material. We generally see an increasing treatment rate over time, with the largest rates in southern Ohio, which is known to have received a large number of resources towards treatment of opioid misuse (Governor’s Cabinet Opiate Action Team, 2012). Table 4 shows the posterior means and 95% credible intervals for the treatment rate odds ratios. We estimate that the odds of treatment are 9% lower in health professional shortage areas compared to non-shortage areas.

Figure 6: Maps of the estimated treatment rates $\hat{p}^{(T)}_{it}$ for each county from 2007 to 2018.

Figure 7 plots the time-varying intercepts for death and treatment rates, $\mu_{t}^{(D)}$ and $\mu_{t}^{(T)}$. Recall that the covariates included in both logistic regression models are indicator variables, so the intercepts for death are interpreted as the yearly average death rates (on the logit scale) among counties without interstates that are neither high intensity drug trafficking areas nor metropolitan statistical areas. We generally observe an increasing trend, with a sharp increase beginning in 2012.
The years following 2012 correspond to the time in which fentanyl began to infiltrate the state, resulting in a drastic increase in overdose deaths (Pardo et al., 2019). The estimates of $\mu_{t}^{(T)}$ represent the yearly average treatment rates (on the logit scale) among counties that are neither health professional shortage areas nor medically underserved areas. We see a generally increasing trend with slight shifts observed in 2010 and also in 2014. We note that these time periods align with the passing of state legislation expanding access to treatment (Governor’s Cabinet Opiate Action Team, 2012) and also with Medicaid expansion, which occurred in Ohio in 2014.

Figure 7: Plot of the time-varying intercepts for treatment and death.

## 6 Discussion

In this paper, we developed an approach for estimating the county-level prevalence of PWMO using indirect information from observed, aggregate surveillance outcomes synthesized within an abundance model framework. By integrating state-level external information, we showed that our model can identify the intercepts at both the level of the latent counts and for the observed surveillance data. We also showed that this approach is superior to assuming homogeneity across counties and to using a single surveillance outcome. By coherently leveraging joint information across data sources, we were able to recover model-based estimates of the latent counts of interest. We applied our framework to estimate annual county-level counts and prevalence of PWMO in Ohio over a 13-year period. By doing so, we were able to estimate the prevalence of PWMO, which is extremely relevant for public health policy and for which county-level estimates across the state did not previously exist. In addition, we estimated death and treatment rates within the population of PWMO. This is in contrast to typical epidemiological analyses that define the population at risk as the whole population rather than just the PWMO.
Therefore, the estimates of these rates are more relevant for describing trends in PWMO and informing the targeting of resources and harm reduction interventions. These estimates can also be used to fill data gaps by informing key model parameters in simulation models developed to inform policy choices (Jalali et al., 2020). We also described associations with covariates at each level of the model. Our work forms a foundation for this line of research, as there are additional methodological challenges to address. For instance, the primary goal of our simulation study was to establish the model’s ability to accurately estimate the latent counts and prevalence of PWMO. We have not thoroughly assessed the conditions required to accurately estimate covariate effects. This issue of identifying covariate effects for both observed and latent processes has been studied under various settings (e.g. Lele et al. (2012); Hepler et al. (2018); Stoner et al. (2019)), but the results of those papers may not hold here, since we integrate multiple observed variables at the desired spatial support with additional information at the state level. A future research question is to investigate how well and under what conditions our model can estimate covariate effects. Additional avenues for future research include studying the advantages and disadvantages of including more outcome variables and also utilizing our model to evaluate policy and optimize resource allocation. Our analysis has several limitations. First, we assume the surveillance outcomes are observed without error, but there is potential for misclassification, particularly of overdose deaths (Slavova et al., 2015). However, Ohio is considered to have excellent reporting of overdose deaths (Scholl et al., 2019). We also use survey estimates, in which misuse is potentially underreported, to inform the intercepts.
While the model can flexibly adjust estimates around the parameters informed by the survey, future work will formally explore sensitivity to bias in the survey estimates. In addition, the language of the survey question addressing opioid misuse was changed in 2015, which may have impacted responses. We also assume that all individuals counted as a treatment admission or an overdose death belong to the population of PWMO. While this is a reasonable assumption, it is unlikely to be universally true, particularly as fentanyl is unknowingly added to other substances (Mars et al., 2019; Townsend et al., 2021). In addition, all analyses are aggregate and should be interpreted at the appropriate level to avoid the ecological fallacy (Pianntadosi et al., 1988). In conclusion, we have developed a model within the abundance model framework to estimate the size of hidden populations using observed data that provide indirect information. Through synthesis of multiple sources of data, we are able to generate model-based estimates of hidden quantities that are critical for informing public health policy and the allocation of resources. We believe this is a promising framework for addressing questions about hidden epidemiological populations and can provide a foundation for future research.

## Acknowledgements

Research reported in this publication was supported by the National Institute on Drug Abuse of the National Institutes of Health under Award Number R21DA045236 and the National Institute of Child Health and Human Development under Award Number R01HD092580. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. These data were provided by the Ohio Department of Health. The Department specifically disclaims responsibility for any analyses, interpretations or conclusions.

## References

* Banerjee et al. (2004) Banerjee, S., Carlin, B. P. and Gelfand, A. E.
(2004) Hierarchical modeling and analysis for spatial data. Boca Raton, Fla.: Chapman & Hall/CRC. * Bao et al. (2015) Bao, L., Raftery, A. and Reddy, A. (2015) Estimating the sizes of populations at risk of HIV infection from multiple data sources using a Bayesian hierarchical model. Statistics and Its Interface, 8, 125–136. * Barocas et al. (2018) Barocas, J. A., White, L. F., Wang, J., Walley, A. Y., LaRochelle, M. R., Bernson, D., Land, T., Morgan, J. R., Samet, J. H. and Linas, B. P. (2018) Estimated prevalence of opioid use disorder in Massachusetts, 2011–2015: A capture–recapture analysis. American Journal of Public Health, 108, 1675–1681. PMID: 30359112. * Berliner (1996) Berliner, L. (1996) Hierarchical Bayesian time series models. In Maximum Entropy and Bayesian Methods (eds. K. Hanson and R. Silver), 15–22. Dordrecht: Kluwer Academic Publishers. * Besag (1974) Besag, J. (1974) Spatial interaction and the statistical analysis of lattice systems. Journal of the Royal Statistical Society. Series B (Methodological), 36, 192–236. * Besbeas et al. (2002) Besbeas, P., Freeman, S., Mordan, B. and Catchpole, E. (2002) Integrating mark-recapture-recovery and census data to estimate animal abundance and demographic parameters. Biometrics, 58, 540–547. * Bradley et al. (2015) Bradley, J. R., Wikle, C. K. and Holan, S. H. (2015) Spatio-temporal change of support with application to American Community Survey multi-year period estimates. Stat, 4, 255–270. * Burke and Buchanich (2018) Burke, D. and Buchanich, J. (2018) Drug overdoses in Pennsylvania: Measuring, tracking, and forecasting the epidemic. Commonwealth: A Journal of Pennsylvania Politics and Policy, 20, 23–37. * CDC (2021) CDC, N. C. f. H. S. (2021) Drug overdose deaths in the U.S. top 100,000 annually. URL: https://www.cdc.gov/nchs/pressroom/nchs_press_releases/2021/20211117.htm. * Cole (2020) Cole, D. J. (2020) Parameter Redundancy and Identifiability. Boca Raton: CRC Press. * Crawford et al.
(2018) Crawford, F. W., Wu, J. and Heimer, R. (2018) Hidden population size estimation from respondent-driven sampling: A network approach. Journal of the American Statistical Association, 113, 755–766. * Cressie et al. (2005) Cressie, N., Perrin, O. and Thomas-Agnan, C. (2005) Likelihood-based estimation for Gaussian MRFs. Statistical Methodology, 2, 1–16. * Cressie and Wikle (2011) Cressie, N. A. C. and Wikle, C. K. (2011) Statistics for spatio-temporal data. Hoboken, N.J.: Wiley. * Daniulaityte et al. (2017) Daniulaityte, R., Juhascik, M. P., Strayer, K. E., Sizemore, I. E., Harshbarger, K. E., Antonides, H. M. and Carlson, R. R. (2017) Overdose deaths related to fentanyl and its analogs - Ohio, January-February 2017. MMWR. Morbidity and Mortality Weekly Report, 66, 904–908. * de Valpine et al. (2017) de Valpine, P., Turek, D., Paciorek, C., Anderson-Bergman, C., Temple Lang, D. and Bodik, R. (2017) Programming with models: writing statistical algorithms for general model structures with NIMBLE. Journal of Computational and Graphical Statistics, 26, 403–417. * Division of Unintentional Injury Prevention (2021) Division of Unintentional Injury Prevention (2021) Drug overdose deaths. Internet. https://www.cdc.gov/drugoverdose/deaths/index.html. * Drug Enforcement Agency (2016) Drug Enforcement Agency (2016) 2016 national drug threat assessment. Internet. https://www.dea.gov/resource-center/2016%20NDTA%20Summary.pdf. * Famoye and Wang (2004) Famoye, F. and Wang, W. (2004) Censored generalized Poisson regression model. Computational Statistics and Data Analysis, 46, 547–560. * Governor’s Cabinet Opiate Action Team (2012) Governor’s Cabinet Opiate Action Team (2012) Attacking Ohio’s opiate epidemic. Online; accessed 6-September-2017. https://mha.ohio.gov/Researchers-and-Media/Combating-the-Opioid-Crisis. * Handcock et al. (2014) Handcock, M. S., Gile, K. J. and Mar, C. M. (2014) Estimating hidden population size using respondent-driven sampling data.
Electronic Journal of Statistics, 8, 1491–1521. * Hepler et al. (2018) Hepler, S., Erhardt, R. and Anderson, T. (2018) Identifying drivers of spatial variation in occupancy with limited replication camera trap data. Ecology, 99, 2152–2158. * Jalali et al. (2020) Jalali, M. S., Ewing, E., Bannister, C. B., Glos, L., Eggers, S., Lim, T. Y., Stringfellow, E., Stafford, C. A., Pacula, R. L., Jalal, H. and Kazemi-Tabriz, R. (2020) Data needs in opioid systems modeling: Challenges and future directions. American Journal of Preventive Medicine. URL: http://www.sciencedirect.com/science/article/pii/S0749379720303998. * Jones et al. (2016) Jones, H. E., Welton, N. J., Ades, A. E., Pierce, M. and Davies, W. (2016) Problem drug use prevalence estimation revisited: heterogeneity in capture-recapture and the role of external evidence. Addiction, 111, 438–447. * Kery and Royle (2010) Kery, M. and Royle, A. (2010) Hierarchical modelling and estimation of abundance and population trends in metapopulation designs. Journal of Animal Ecology, 79, 453–461. * Knape et al. (2015) Knape, J., Korner-Nievergelt, F. and Yoccoz, N. (2015) Estimates from non-replicated population surveys rely on critical assumptions. Methods in Ecology and Evolution, 6, 298–306. * Lele et al. (2012) Lele, S., Moreno, M. and Bayne, E. (2012) Dealing with detection error in site occupancy surveys: what can we do with a single survey? Journal of Plant Ecology, 5, 22–31. * Lerner and Fauci (2019) Lerner, A. M. and Fauci, A. S. (2019) Opioid Injection in Rural Areas of the United States: A Potential Obstacle to Ending the HIV Epidemic. JAMA, 322, 1041–1042. * Mars et al. (2019) Mars, S. G., Rosenblum, D. and Ciccarone, D. (2019) Illicit fentanyls in the opioid street market: desired or imposed? Addiction, 114, 774–780. * Min et al. (2020) Min, J. E., Pearce, L. A., Homayra, F., Dale, L. M., Barocas, J. A., Irvine, M. A., Slaunwhite, A. K., McGowan, G., Torban, M. and Nosyk, B. 
(2020) Estimates of opioid use disorder prevalence from a regression-based multi-sample stratified capture-recapture analysis. Drug and Alcohol Dependence, 217, 108337. * Neath and Samaniego (1997) Neath, A. and Samaniego, F. (1997) On the efficacy of Bayesian inference for nonidentifiable models. The American Statistician, 51, 225–232. * Office of National Drug Control Policy Executive, Office of the President of the United States (2011) Office of National Drug Control Policy Executive, Office of the President of the United States (2011) Epidemic: responding to America’s prescription drug abuse crisis. Internet. https://www.hsdl.org/?view&did=4609. * Ohio Public Health Data Warehouse (2020) Ohio Public Health Data Warehouse (2020) Ohio resident mortality data. http://publicapps.odh.ohio.gov/EDW/DataCatalog. Accessed February 2, 2020. * Paciorek (2009) Paciorek, C. (2009) Technical vignette 5: Understanding intrinsic Gaussian Markov random field spatial models, including intrinsic conditional autoregressive models. Tech. rep., Department of Statistics, University of California, Berkeley and Department of Biostatistics, Harvard School of Public Health. * Palamar et al. (2016) Palamar, J. J., Shearston, J. A. and Cleland, C. M. (2016) Discordant reporting of nonmedical opioid use in a nationally representative sample of US high school seniors. The American Journal of Drug and Alcohol Abuse, 42, 530–538. * Pardo et al. (2019) Pardo, B., Taylor, J., Caulkins, J. P., Kilmer, B., Reuter, P. and Stein, B. D. (2019) The Future of Fentanyl and Other Synthetic Opioids. Santa Monica, CA: RAND Corporation. * Pianntadosi et al. (1988) Pianntadosi, S., Byar, D. P. and Green, S. B. (1988) The ecological fallacy. American Journal of Epidemiology, 127, 893–904. * Rembert et al. (2017) Rembert, M., Betz, M., Feng, B. and Partridge, M. (2017) Taking measure of Ohio’s opioid crisis. 
https://aede.osu.edu/sites/aede/files/publication_files/Swank%20-%20Taking%20Measure%20of%20Ohios%20Opioid%20Crisis.pdf. * Royle (2004) Royle, J. A. (2004) N-mixture models for estimating population size from spatially replicated counts. Biometrics, 60, 108–115. * Rudd et al. (2016) Rudd, R., Seth, P., David, F. and Scholl, L. (2016) Increases in drug and opioid-involved overdose deaths - United States, 2010-2015. MMWR Morb Mortal Wkly Rep. * SAMHSA, Center for Behavioral Health Statistics and Quality (2003-2005, 2006-2008, 2009-2010, 2011-2014) SAMHSA, Center for Behavioral Health Statistics and Quality (2003-2005, 2006-2008, 2009-2010, 2011-2014) National survey on drug use and health. http://www.samhsa.gov/. * SAMHSA, Center for Behavioral Health Statistics and Quality (2015-2016, 2016-2017, 2017-2018, 2018-2019) — (2015-2016, 2016-2017, 2017-2018, 2018-2019) National survey on drug use and health, 2-year RDAS. http://www.samhsa.gov/. * Schaub and Abadi (2011) Schaub, M. and Abadi, F. (2011) Integrated population models: A novel analysis framework for deeper insights into population dynamics. Journal of Ornithology, 152, 227–237. * Scholl et al. (2019) Scholl, L., Seth, P., Kariisa, M., Wilson, N. and Baldwin, G. (2019) Drug and opioid-involved overdose deaths - United States, 2013-2017. MMWR, 67, 1419–1427. * Schuler et al. (2020) Schuler, M. S., Griffin, B. A., Cerdá, M., McGinty, E. E. and Stuart, E. A. (2020) Methodological challenges and proposed solutions for evaluating opioid policy effectiveness. Health Services and Outcomes Research Methodology. In press. * Slavova et al. (2015) Slavova, S., O’Brien, D. B., Creppage, K., Dao, D., Fondario, A., Haile, E., Hume, B., Largo, T. W., Nguyen, C., Sabel, J. C., Wright, D. and Members of the Council of State and Territorial Epidemiologists Overdose Subcommittee (2015) Drug overdose deaths: Let’s get specific. Public Health Reports, 130. * Sólymos et al. (2012) Sólymos, P., Lele, S. and Bayne, E.
(2012) Conditional likelihood approach for analyzing single visit abundance survey data in the presence of zero inflation and detection error. Environmetrics, 23, 197–205. * Stoner et al. (2019) Stoner, O., Economou, T. and da Silva, G. D. M. (2019) A hierarchical framework for correcting under-reporting in count data. Journal of the American Statistical Association, 0, 1–17. * Townsend et al. (2021) Townsend, T., Kline, D., Rivera-Aguirre, A., Bunting, A., Mauro, P., Marshall, B., Martins, S. and Cerdá, M. (2021) Racial/ethnic and geographic trends in combined stimulant/opioid overdoses, 2007-2019. American Journal of Epidemiology. In press. * Zibbell et al. (2015) Zibbell, J., Iqbal, K., Patel, R., Suryaprasad, A., Sanders, K., Moore-Moravian, L., Serrecchia, J., Blankenship, S., Ward, J., Holtzman, D. and Centers for Disease Control and Prevention (2015) Increases in hepatitis C virus infection related to injection drug use among persons aged $\geq$ 30 years - Kentucky, Tennessee, Virginia, and West Virginia, 2006-2012. MMWR, 64, 453–458.

## Supplemental Material

Figure 1: Plots of the estimates of the intercept parameters for the $H=100$ simulated data sets; panels (a) $\beta_{0}^{\mu}$, (b) $\beta_{1}^{\mu}$, (c) $\mu_{t}^{(1)}$, (d) $\mu_{t}^{(2)}$. The vertical red lines correspond to the true values used to simulate the data.

Figure 2: Boxplots of the (a) absolute and (b) relative difference between the estimate of $\lambda_{it}$ and the true simulated value for each location in each of the $H=100$ simulated data sets. Vertical red lines correspond to the first quartile, median, and third quartile across all of the simulated data sets.

Figure 3: Scatterplots from randomly selected simulated data sets showing the estimated and true (a) counts ($N_{it}$) and (b) relative risks ($\lambda_{it}$). Red lines indicate when the estimates equal the truth.
Table 1: Estimates of the average statewide prevalence of misuse over the given time frame, along with standard errors of the estimates.

  Years        Estimate  Standard Error
  2003 - 2006  0.05      0.0025
  2007 - 2010  0.055     0.0026
  2011 - 2014  0.052     0.0024
  2015 - 2016  0.047     0.0041
  2016 - 2017  0.051     0.0037
  2017 - 2018  0.041     0.0033
  2018 - 2019  0.043     0.0043

Figure 4: Maps of the estimated relative risk of PWMO, given by $\lambda_{it}$.

Figure 5: Maps of the posterior standard deviation of the death rate, $p^{(D)}_{it}$.

Figure 6: Maps of the posterior standard deviation of the treatment rate, $p^{(T)}_{it}$.
# (In)equality distance patterns and embeddability into Hilbert spaces Alexandru Chirvasitu ###### Abstract We show that compact Riemannian manifolds, regarded as metric spaces with their global geodesic distance, cannot contain a number of rigid structures such as (a) arbitrarily large regular simplices or (b) arbitrarily long sequences of points equidistant from pairs of points preceding them in the sequence. All of this provides evidence that Riemannian metric spaces admit what we term loose embeddings into finite-dimensional Euclidean spaces: continuous maps that preserve both equality as well as inequality. We also prove a local-to-global principle for Riemannian-metric-space loose embeddability: if every finite subspace thereof is loosely embeddable into a common $\mathbb{R}^{N}$, then the metric space as a whole is loosely embeddable into $\mathbb{R}^{N}$ in a weakened sense. Key words: Riemannian manifold; geodesic; isometry; Euclidean distance MSC 2010: 30L05; 53B20; 53B21 ## Introduction The present note is a follow-up on [3], where the following notion was introduced ([3, Definition 2.2]): ###### Definition 0.1. Let $(X,d_{X})$ and $(Y,d_{Y})$ be two metric spaces. A continuous map $f:X\to Y$ is a loosely isometric (or just loose) embedding if $d_{X}(x,x^{\prime})\mapsto d_{Y}(fx,fx^{\prime})$ is a well-defined one-to-one map on the codomain of $d_{X}$. $(X,d_{X})$ is loosely embeddable (or LE) in $(Y,d_{Y})$ if it admits such an $f:X\to Y$, and it is just plain LE (without specifying $Y$) if it is loosely embeddable into some finite-dimensional Hilbert space. $\blacklozenge$ In other words, $f$ turns (un)equal distances into (un)equal distances respectively. Note in particular that loose embeddings are automatically one-to-one, so they are embeddings. It is the condition of being isometric that is being loosened.
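On a finite metric space, the condition in Definition 0.1 can be checked directly: the induced map on distances is well defined and one-to-one exactly when two pairs of points have equal source distances if and only if their images have equal Euclidean distances. A small illustrative sketch (our own code, not from the paper):

```python
from itertools import combinations
from math import dist, isclose

def is_loose_embedding(points, d_X, f, tol=1e-9):
    """Decide whether f loosely embeds the finite metric space
    (points, d_X) into Euclidean space: distinct points must have
    distinct images, and d_X(x, x') = d_X(z, z') must hold exactly
    when |f(x) - f(x')| = |f(z) - f(z')|."""
    pairs = list(combinations(points, 2))
    if any(dist(f(x), f(xp)) <= tol for x, xp in pairs):
        return False  # f is not even injective
    for x, xp in pairs:
        for z, zp in pairs:
            same_src = isclose(d_X(x, xp), d_X(z, zp), abs_tol=tol)
            same_img = isclose(dist(f(x), f(xp)), dist(f(z), f(zp)), abs_tol=tol)
            if same_src != same_img:
                return False
    return True

# Vertices of a unit square, loosely embedded by the identity: the two
# distinct distances (side 1, diagonal sqrt(2)) stay distinct.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
ok = is_loose_embedding(square, dist, lambda p: p)
# Projecting onto the x-axis collapses points, so it fails.
bad = is_loose_embedding(square, dist, lambda p: (p[0], 0))
```

The check is quadratic in the number of point pairs, which is all that is needed for the finite subspaces appearing in Question 2.1 and Theorem 2.4 below.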
The concept was originally motivated by the fact that, by (a slight paraphrase of) [7, Corollary 4.9], LE compact metric spaces have quantum automorphism groups, i.e. they admit universal isometric actions by compact quantum groups. Despite its origin in the non-commutative-geometry considerations central to [3], the notion seems to hold some independent interest of its own. Roughly speaking: Loose embeddability captures the combinatorial patterns of distance (in)equality achievable in Hilbert spaces. We focus in particular on Riemannian compact metric spaces, i.e. those obtained by equipping Riemannian manifolds with their global geodesic distance (Definition 1.1). According to [4, Theorem 0.2] compact connected Riemannian manifolds (with or without boundary) always have quantum isometry groups, which in fact coincide with their classical isometry groups. This means that in the present context we are departing from the initial motivation for considering loose embeddability, namely the existence of quantum isometry groups. The concept nevertheless suggests some apparently-non-trivial questions in metric and Riemannian geometry. Riemannian metric spaces seem particularly well suited to loose embeddability, as attested by several results ruling out non-LE metric configurations in the Riemannian context: * • Lemma 1.2 is the simple remark that 1-dimensional Riemannian manifolds are always loosely embeddable. * • In Proposition 1.3 we observe that a (compact) Riemannian metric space cannot contain $n$-tuples of equidistant points for arbitrarily large $n$. * • Generalizing this, Theorem 1.6 shows that compact Riemannian metric spaces do not contain arbitrarily large sets of pairs $\\{x_{i},y_{i}\\}$ of points with $x_{j}$ and $y_{j}$ both equidistant from $x_{i}$ and $y_{i}$ for all $j>i$. This rules out (in the Riemannian case) a subtler class of counterexamples to loose metric embeddability given by Lemma 1.5.
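The equidistance pattern in the last bullet is easy to test for on a finite point set. As a sketch (our own illustrative code, not from the paper), using Euclidean distance in $\mathbb{R}^{3}$, where the three coordinate-axis pairs $\pm e_{i}$ realize exactly such a configuration (their median hyperplanes are the coordinate hyperplanes):

```python
from math import dist, isclose

def is_median_flag(pairs, d, tol=1e-9):
    """Check the pattern from the bullet above: for every index s,
    each point of a later pair (index i > s) is equidistant from the
    two points x_s, y_s of the earlier pair."""
    for s, (xs, ys) in enumerate(pairs):
        for xi, yi in pairs[s + 1:]:
            for z in (xi, yi):
                if not isclose(d(z, xs), d(z, ys), abs_tol=tol):
                    return False
    return True

# The pairs (+-e_1, +-e_2, +-e_3) in R^3: each later point lies on the
# median hyperplane of every earlier pair.
axes = [((1, 0, 0), (-1, 0, 0)),
        ((0, 1, 0), (0, -1, 0)),
        ((0, 0, 1), (0, 0, -1))]
ok = is_median_flag(axes, dist)
# Moving one point off the earlier median hyperplanes breaks the pattern.
bad = is_median_flag(axes[:2] + [((1, 1, 1), (0, 0, 1))], dist)
```

In $\mathbb{R}^{N}$ one can build at most $N$ such mutually orthogonal median hyperplanes this way, which is the counting behind Lemma 1.5 below.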
In short, Riemannian compact metric spaces make for poor counterexamples to loose metric embeddability. Section 2 further contains a number of questions related to loose embeddability, and answers a weakened form of Question 2.1 affirmatively in the Riemannian setting: Theorem 2.4 says, roughly speaking, that if the finite subspaces of a compact Riemannian metric space $(M,d)$ are uniformly loosely embeddable (i.e. loosely embeddable into Hilbert spaces of uniformly bounded dimension) then $(M,d)$ itself is loosely embeddable in a weak sense. ### Acknowledgements This work was partially supported by NSF grants DMS-1801011 and DMS-2001128. ## 1 Distance configurations in Riemannian metric spaces It is a natural problem to determine to what extent various classes of metric spaces are loosely embeddable in the sense of Definition 0.1. Of special interest, for instance, are Riemannian manifolds equipped with the geodesic metric. [5, 1] are good sources for the Riemannian geometry we will peruse. ###### Definition 1.1. A Riemannian metric space is a connected Riemannian manifold equipped with the global geodesic metric. $\blacklozenge$ Unless specified otherwise, all of our metric spaces are assumed compact; we thus often drop that adjective for brevity. Note first: ###### Lemma 1.2. One-dimensional compact Riemannian metric spaces are loosely embeddable. ###### Proof. Indeed, 1-dimensional connected compact Riemannian manifolds are isometric to circles, which can be loosely embedded into the plane via their standard origin-centered round realization. $\blacksquare$ It is observed in [3, Example 2.3] that metric spaces containing regular $n$-simplices (i.e. point $n$-tuples with equal pairwise distances) for each $n$ cannot be loosely embeddable. The following observation rules this out for Riemannian metric spaces. ###### Proposition 1.3. Let $(M,d)$ be a compact Riemannian manifold with its geodesic metric.
There is an upper bound on the number of vertices of a regular simplex in $M$. ###### Proof. Suppose, for a contradiction, that $M$ contains regular simplices with arbitrarily many vertices; let $\Delta$ be such a simplex with $n$ vertices and fix one of them, $v_{0}$. Now let $v_{1}$ and $v_{2}$ be two other vertices of $\Delta$, chosen so that the angle $\varepsilon=\measuredangle v_{1}v_{0}v_{2}$ is sufficiently small (possible for large $n$). Since $M$ is compact there is a global lower bound $K$ for its sectional curvature. By the Toponogov comparison theorem ([1, §6.4.1, Theorem 73]) the length of $v_{1}v_{2}$ is bounded above by the length of the third edge in an isosceles triangle with angle $\varepsilon$ subtending the two edges of equal length $\ell=v_{0}v_{1}=v_{0}v_{2}$ in the space form [1, §6.3.2] of constant curvature $K$. This length goes to $0$ as $\varepsilon$ does, contradicting $v_{1}v_{2}=\ell$. $\blacksquare$ Large regular simplices are not the only obstruction to loose embeddability. The somewhat more sophisticated configurations that pose problems involve, roughly speaking, large sets of points each equidistant to large sets of pairs of points. To make sense of this we need some terminology. ###### Definition 1.4. Let $n$ be a positive integer. An $n$-flag of median hyperplanes is a collection of points $\\{p_{i},q_{i},\ 0\leq i\leq n-1\\}$ (1-1) such that $d(z,p_{s})=d(z,q_{s})$ for all $z=p_{i}$ or $q_{i}$ with $i>s$. $\blacklozenge$ The term ‘median hyperplane’ is meant to invoke the locus of points in a Euclidean space that are equidistant from two given points, while ‘flag’ means chain ordered by inclusion, as in $\\{p_{i},q_{i}\\}_{i\geq 0}\supset\\{p_{i},q_{i}\\}_{i\geq 1}\supset\cdots.$ (1-2) The relevance of the concept stems from the following simple remark. ###### Lemma 1.5. A compact metric space containing $n$-flags of median hyperplanes for arbitrarily large $n$ is not LE. ###### Proof.
If such a space $(X,d)$ were loosely embeddable in ${\mathbb{R}}^{d}$, say, then each of the sets 1-2 would be contained in a hyperplane of ${\mathbb{R}}^{d}$, namely the median hyperplane of (the images in ${\mathbb{R}}^{d}$ of) $p_{i}$ and $q_{i}$. These hyperplanes would be orthogonal in the sense that their range projections commute, so any $>d$ of them would intersect trivially. $\blacksquare$ On the other hand, Lemma 1.5 cannot be used to rule out the loose embeddability of Riemannian manifolds: ###### Theorem 1.6. A compact Riemannian manifold $(X,d)$ equipped with the geodesic metric cannot contain $n$-flags of median hyperplanes for arbitrarily large $n$. This will require some amount of preparation. First, we will have to make some size estimates (for angles, distances, etc.). This raises the usual issue of starting with quantities that are within $\varepsilon>0$ of each other and then obtaining new estimates in terms of $\varepsilon$ such as, say, $C\varepsilon$ for some constant $C$. In order to avoid such irrelevancies we make the following ###### Convention 1.7. $\varepsilon$ will typically denote a small positive real, and whenever a new small quantity depending on $\varepsilon$ is introduced, we denote it by decorating $\varepsilon$ with the usual symbols used to indicate differentiation. So for instance $\varepsilon^{\prime}$, $\varepsilon^{\prime\prime}$, $\varepsilon^{(5)}$, etc. all denote small positive reals depending on $\varepsilon$ in some unspecified fashion. The same notational convention applies to other symbols meant to denote small positive reals. $\blacklozenge$ In the discussion below we will modify the Riemannian tensor $g$ on a geodesic ball of a Riemannian manifold $(M,g)$ so as to “flatten” said ball. The relevant concept is ###### Definition 1.8. Let $B\subset M$ be a geodesic ball in a Riemannian manifold $M$ with tensor $g$, and suppose we have fixed a coordinate system for $B$.
We say that $g$ is $\varepsilon$-Euclidean to order $k$ along $B$ if the derivatives of orders $\leq k$ of $g$ are within $\varepsilon$ of their usual Euclidean counterparts, uniformly on $B$, in the respective coordinate system. We typically omit $k$ from the discussion, simply assuming it is large enough ($k\geq 2$ will do for most of our purposes); for that reason, we abbreviate the phrase as $\varepsilon$-Euclidean. The specific $\varepsilon>0$ will also depend on the chosen coordinates, but we ignore this issue too, as the discussion below will only require $\varepsilon$ sufficiently small, and the various coordinate choices will not affect this. $\blacklozenge$ As a consequence of the smooth dependence of ODE solutions on the initial data (e.g. [6, Theorem B.3]), “sufficiently Euclidean” Riemannian metrics in the sense of Definition 1.8 have “sufficiently straight” geodesics. More formally (keeping in mind Convention 1.7): ###### Proposition 1.9. Let $(M,g)$ be a Riemannian manifold, $\varepsilon$-Euclidean with respect to some coordinate system. Then, for every geodesic $\gamma$ in $M$, parallel transport of vectors along $\gamma$ does not alter angles by more than $\varepsilon^{\prime}$. ###### Notation 1.10. Let $M$ be a Riemannian manifold with metric tensor $g$ and geodesic distance $d$. We write $\mathrm{inj}(M)\ :=\text{ {\it injectivity radius} of }M$ ([5, p.271] or [1, p.142, Definition 23]): the largest number such that all pairs of points less than $\mathrm{inj}(M)$ apart are joined by a unique geodesic segment. For points $p,q$ in a Riemannian manifold $M$ with $\ell:=d(p,q)<\mathrm{inj}(M)$ we write $\gamma_{p}^{q}:[0,\ell]\to M$ for the geodesic arc from $p$ to $q$, parametrized by arclength. We will also abuse notation and denote the image of $\gamma_{p}^{q}$ by the same symbol. $\blacklozenge$ ###### Definition 1.11. Let $p,q$ be points in a Riemannian manifold $M$, less than $\mathrm{inj}(M)$ apart.
The angle $\angle(v_{p},v_{q})$ between two tangent vectors $v_{p}\in T_{p}M$ and $v_{q}\in T_{q}M$ is defined by * • parallel-transporting ([5, Chapter 2, Proposition 2.6 and Definition 2.5] or [1, p.264, Proposition 61]) the vector $v_{q}$ to a vector $v\in T_{p}M$ along $\gamma_{p}^{q}$; * • setting $\angle(v_{p},v_{q}):=\text{ angle between }v_{p}\text{ and }v,$ computed in $T_{p}(M)$ as usual, via the Riemannian tensor. For points $p$, $q$, $p^{\prime}$, $q^{\prime}$ in $M$, any two of which are less than $\mathrm{inj}(M)$ apart, the angle $\angle(\gamma_{p}^{q},\gamma_{p^{\prime}}^{q^{\prime}})$ is the angle (defined as above) between the unit velocity vectors $(\gamma_{p}^{q})^{\prime}(0)$ and $(\gamma_{p^{\prime}}^{q^{\prime}})^{\prime}(0)$. $\blacklozenge$ ###### Remark 1.12. Although Definition 1.11 appears to bias one of the pairs $p,q$ and $p^{\prime},q^{\prime}$ over the other, the notion is in fact symmetric: because parallel transport is an isometry between tangent spaces, whether we parallel-transport $(\gamma_{p^{\prime}}^{q^{\prime}})^{\prime}(0)\text{ to }T_{p}M$ or $(\gamma_{p}^{q})^{\prime}(0)\text{ to }T_{p^{\prime}}M$ does not affect the value of the angle. $\blacklozenge$ For an $n$-dimensional Riemannian metric space $(M,d=d_{M})$ with a basepoint $z\in M$ we will consider small geodesic balls $B_{r}=B_{r}(z):=\\{q\in M\ |\ d(z,q)\leq r\\}$ centered at $z$, parametrized with normal coordinates [1, §4.4.1] $x^{i}$, $1\leq i\leq n$ (so $z$ is identified with the origin $(0,\cdots,0)$). Recall that this means the geodesics emanating from $z$ are identified with straight line segments. Having fixed such a coordinate system, we can speak about segments in $B$, angles between those segments, etc.; it will be clear from context when these are actual segments in the ambient ${\mathbb{R}}^{n}$ housing $B$ rather than, say, geodesic segments in $M$. Typically, the radius $r$ decorating $B_{r}$ will be small.
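As a concrete sanity check of the normal-coordinate picture (our own numerical sketch on the unit 2-sphere, using numpy; not part of the paper): radial geodesics from $z$ are straight rays whose parameter is exactly geodesic distance, and the gradient formula for the squared geodesic distance established in Lemma 1.13 below can be verified by central finite differences.

```python
import numpy as np

Z = np.array([0.0, 0.0, 1.0])  # base point z on the unit sphere S^2

def geo_dist(a, b):
    """Global geodesic distance on the unit sphere."""
    return np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))

def exp_z(u):
    """Exponential map at z: normal coordinates u = (u^1, u^2) in the
    tangent plane T_z S^2 = span(e_1, e_2)."""
    r = np.linalg.norm(u)
    if r < 1e-15:
        return Z
    w = np.array([u[0], u[1], 0.0]) / r  # unit tangent direction
    return np.cos(r) * Z + np.sin(r) * w

# Radial geodesics are straight in normal coordinates: exp_z(r v) lies
# at geodesic distance exactly r from z.
radial_ok = np.isclose(geo_dist(Z, exp_z(np.array([0.3, 0.0]))), 0.3)

# Finite-difference check of the gradient of psi(x) = d(x, p)^2 at z:
# it should equal -2 d(z, p) v, with v the unit tangent toward p.
theta, ell, h = 0.7, 0.5, 1e-5
v = np.array([np.cos(theta), np.sin(theta)])
p = exp_z(ell * v)
psi = lambda u: geo_dist(exp_z(u), p) ** 2
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
grad = np.array([(psi(h * e) - psi(-h * e)) / (2 * h) for e in (e1, e2)])
```

Here the numerically computed gradient agrees with $-2d(z,p)v$ to the accuracy of the difference scheme, since at $z$ the metric in normal coordinates is Euclidean.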
We will occasionally have to normalize the Riemannian metric in $B_{r}$, scaling distances from the origin $z=0\in B$ by $\frac{1}{r}$ so that the new ball ${}_{n}B_{r}$ (‘n’ for ‘normalized’) has radius $1$. This normalization procedure has the effect of “flattening” the Riemannian metric, in the sense that the Riemannian structure can be made arbitrarily $\varepsilon$-Euclidean (Definition 1.8) as $r\to 0$. In the discussion below, for a Riemannian manifold $M$ with geodesic metric $d=d_{M}$, we write $\eta(p,q)=\eta_{M}(p,q):=d(p,q)^{2}$ (1-3) for the squared-distance function (the notation matches that in [10] for instance, where this function features prominently). ###### Lemma 1.13. Let $M$ be a Riemannian manifold and $B=B_{r}(z)$ a sufficiently small geodesic ball equipped with normal coordinates around $z\in M$. Let also $p\in B$ be a point and consider the function $\psi:x\mapsto\eta(x,p),$ with $\eta$ as in 1-3. Denoting by $v\in T_{z}M$ the unit vector tangent to the geodesic $z\to p$, the gradient $\nabla\psi$ at $z$ equals $-2d(z,p)v$. ###### Proof. This is immediate after choosing a normal coordinate system around $p$, whereupon $\psi$ becomes $\psi:(x^{1},\cdots,x^{n})\mapsto\sum_{i=1}^{n}(x^{i})^{2}.$ $\blacksquare$ ###### Proof of Theorem 1.6. Suppose we do have arbitrarily large flags of median hyperplanes in our compact Riemannian space $(M,d)$. Since $M$ is compact, we can assume that some large flag 1-1 is contained entirely within some small geodesic ball $B_{r}$ centered at a point $z:=p_{n}$ constituting the flag. We can assume $r$ is small enough that the normalized ball ${}_{n}B_{r}$ is $\varepsilon$-Euclidean in the sense of Definition 1.8. Furthermore, because the size of the flag can also be chosen arbitrarily large, we can also assume that $\angle\left(\gamma_{p_{i}}^{q_{i}},\ \gamma_{p_{j}}^{q_{j}}\right)<\varepsilon^{\prime},\ \forall 0\leq i\neq j<n.$ Henceforth, it will be enough to work with $p_{i}$ and $q_{i}$ for $i=0,1$.
By the flag condition, both $p_{1}$ and $q_{1}$ are equidistant from $p_{0}$ and $q_{0}$. Additionally, we have $\angle\left(\gamma_{p_{1}}^{q_{1}},\ \gamma_{p_{0}}^{q_{0}}\right)<\varepsilon^{\prime}$ (1-4) Furthermore, because the metric is $\varepsilon$-Euclidean, the unit-length velocity vectors $v_{x}$ along $\gamma:=\gamma_{p_{1}}^{q_{1}}$ stay within an angle of $\varepsilon^{\prime\prime}$ of the initial velocity vector $(\gamma_{p_{1}}^{q_{1}})^{\prime}(0)$ (by Proposition 1.9), so 1-4 implies that $\angle\left(v_{x},\ (\gamma_{p_{0}}^{q_{0}})^{\prime}(0)\right)<\varepsilon^{(3)},\ \forall x\in\gamma_{p_{1}}^{q_{1}}.$ (1-5) By Lemma 1.13 (applied twice), for each $x\in\gamma$ the gradient of the function $\psi:x\mapsto d(x,p_{0})^{2}-d(x,q_{0})^{2}$ (1-6) is $2d(x,q_{0})(\gamma_{x}^{q_{0}})^{\prime}(0)-2d(x,p_{0})(\gamma_{x}^{p_{0}})^{\prime}(0).$ (1-7) In the Euclidean metric, this would be the parallel transport of $2\overrightarrow{p_{0}q_{0}}$ to $x$; by our assumption that the original metric is $\varepsilon$-Euclidean, the angle between 1-7 and $(\gamma_{p_{0}}^{q_{0}})^{\prime}(0)$ is therefore $<\varepsilon^{(4)}$. To summarize, we have * • a small angle between each gradient $\nabla_{x}\psi$ of $\psi$ along $\gamma$, given by 1-7, and $(\gamma_{p_{0}}^{q_{0}})^{\prime}(0)$; * • a small angle between the latter and the unit tangent vectors $v_{x}$ at $x\in\gamma$, by 1-5. In particular, at each $x$ along $\gamma$ the gradient $\nabla_{x}\psi$ and the velocity along $\gamma$ have positive inner product. This means that the function $\psi$ in 1-6 increases strictly along the geodesic $\gamma$, contradicting the fact that it must take the value $0$ at both endpoints $p_{1}$ and $q_{1}$. $\blacksquare$ ## 2 Questions Proposition 1.3 and Theorem 1.6 seem to suggest that compact Riemannian metric spaces are particularly amenable to loose metric embeddability. I do not know whether they are always LE, but that problem decomposes naturally: first, ###### Question 2.1.
Let $(X,d)$ be a compact metric space and $N\in{\mathbb{Z}}_{>0}$ a positive integer such that every finite subspace of $(X,d)$ is loosely embeddable into ${\mathbb{R}}^{N}$. Does it follow that $(X,d)$ itself is LE? In other words, does uniform loose embeddability for the finite subspaces of $(X,d)$ entail the LE property for $X$ as a whole? Secondly, to circle back to the Riemannian context: ###### Question 2.2. Do compact Riemannian metric spaces satisfy the hypothesis of Question 2.1? We conclude with a partial answer to Question 2.1. First, we need ###### Definition 2.3. A metric space $(X,d_{X})$ is weakly loosely embeddable (or weakly LE) in the metric space $(Y,d_{Y})$ if there is an injective map $f:X\to Y$ satisfying only the backwards implication in the biconditional implicit in Definition 0.1: $d_{Y}(fx,fx^{\prime})=d_{Y}(fz,fz^{\prime})\Leftarrow d_{X}(x,x^{\prime})=d_{X}(z,z^{\prime}).$ (2-1) $\blacklozenge$ ###### Theorem 2.4. Under the hypotheses of Question 2.1, a compact Riemannian metric space is weakly LE in ${\mathbb{R}}^{N}$. ###### Proof. Let $(M,d)$ be a compact Riemannian manifold with its geodesic metric and denote by $({\mathcal{F}},\subseteq)$ the poset of finite subsets $F\subset M$ (ordered by inclusion). For each $F\in{\mathcal{F}}$ we fix a map $\psi_{F}:F\to B:=\text{ origin-centered unit ball in }{\mathbb{R}}^{N}$ such that * • $\psi_{F}$ is a loose embedding of $(F,d)$, rescaled if needed so as to ensure it lands in the ball $B$; * • the diameter of $\psi_{F}(F)$ is precisely $1$, with $\psi_{F}p=0$ and $\psi_{F}q$ on the unit sphere $\partial B$ for some $p,q\in F$. This gives us an ${\mathcal{F}}$-indexed net [9, Chapter 3, p.187] $\psi_{F}$ of maps $F\to B$, and since * • $B$ is compact; * • every element $p\in M$ belongs to sufficiently large $F\in{\mathcal{F}}$, i.e. 
to the upward-directed set $\\{F\in{\mathcal{F}}\ |\ p\in F\\},$ we can take the pointwise limit $\psi(p):=\lim_{{\mathcal{F}}}\psi_{F}(p)\in B$ to obtain a map $\psi:M\to B$. It remains to prove that $\psi$ (a) satisfies the weak LE condition 2-1; (b) is continuous; (c) is one-to-one. a: condition 2-1. We want to prove that $|\psi x-\psi x^{\prime}|=|\psi z-\psi z^{\prime}|\Leftarrow d_{M}(x,x^{\prime})=d_{M}(z,z^{\prime})$ (2-2) holds; this follows by passing to the limit over $F\in{\mathcal{F}}$ in the analogous implication for the partially-defined maps $\psi_{F}:F\to B$. We can now define a map $\varphi:(\text{set of distances }d_{M}(p,q))\to{\mathbb{R}}_{\geq 0}$ (2-3) by $\varphi(d_{M}(p,q))=|\psi p-\psi q|.$ (2-4) We define the maps $\varphi_{F}$, $F\in{\mathcal{F}}$, similarly, substituting $\psi_{F}$ for $\psi$ in 2-4. b: $\psi$ is continuous. We have to argue that $\lim_{d\to 0}\varphi(d)=0.$ If not, we can find a subnet $(F_{\alpha})_{\alpha}$ of ${\mathcal{F}}$ and points $p_{\alpha},q_{\alpha}\in F_{\alpha}$ such that $d_{M}(p_{\alpha},q_{\alpha})\to 0$ but $\varepsilon:=\inf_{\alpha}|\psi_{F_{\alpha}}p_{\alpha}-\psi_{F_{\alpha}}q_{\alpha}|>0;$ (2-5) we abbreviate $\psi_{\alpha}:=\psi_{F_{\alpha}},$ and similarly for $\varphi$. If $\ell>0$ is sufficiently small (smaller than the injectivity radius of $M$, for instance [2, Definition following Theorem III.2.3]), then $(M,d_{M})$ contains geodesic triangles with edges $\ell,\ \ell,\ t$ for every $2\ell>t>0$. This can easily be seen, for instance, by continuously decreasing the angle between two length-$\ell$ geodesic rays based at a point from $\pi$ to $0$; the distance between the extremities of those geodesic rays will then decrease continuously from $2\ell$ to $0$. Now fix some $\ell>0$, sufficiently small.
We will have $d_{M}(p_{\alpha},q_{\alpha})<2\ell$ for sufficiently large $\alpha$, and hence, by the preceding remark, we can find geodesic triangles in $M$ with edges $\ell$, $\ell$ and $d_{M}(p_{\alpha},q_{\alpha})$ (assuming also that $\alpha$ is large enough to ensure that $F_{\alpha}$ contains the tip of that isosceles geodesic triangle). Applying $\psi_{\alpha}$, we have a triangle in $B$ with edges $\varphi_{\alpha}(\ell),\ \varphi_{\alpha}(\ell),\ \varphi_{\alpha}(d_{M}(p_{\alpha},q_{\alpha})).$ In particular, the triangle inequality together with 2-5 gives $\varphi_{\alpha}(\ell)\geq\frac{\varphi_{\alpha}(d_{M}(p_{\alpha},q_{\alpha}))}{2}\geq\frac{\varepsilon}{2}>0$ (2-6) Since $\ell>0$ was arbitrary (so long as it was small enough), this means that by passing to large enough $\alpha$ we can find arbitrarily large finite subsets $F$ of $M$, of girth $\geq\ell$ (i.e. so that all pairs of points are at least $\ell$ apart), and hence so that (by 2-6) $|\psi p-\psi q|\geq\frac{\varepsilon}{2},\ \forall p,q\in F.$ Since the cardinality of $F$ (and hence that of $\psi(F)$) can be made arbitrarily large, we are contradicting the compactness (hence total boundedness) of $B$. This completes the proof of b above. c: $\psi$ is injective. Suppose not. In a sense, this means we are in precisely the opposite situation to that encountered in the proof of part b: there is a subnet $(F_{\alpha})_{\alpha}$ of ${\mathcal{F}}$ with points $p_{\alpha},\ q_{\alpha}\in F_{\alpha}$ such that $\displaystyle\ell:=\inf_{\alpha}d_{M}(p_{\alpha},q_{\alpha})$ $\displaystyle>0$ $\displaystyle\inf_{\alpha}|\psi_{\alpha}p_{\alpha}-\psi_{\alpha}q_{\alpha}|$ $\displaystyle=0.$ (2-7) By compactness, we can also assume $p_{\alpha}$ and $q_{\alpha}$ are convergent and hence in particular that the distances $\ell_{\alpha}:=d_{M}(p_{\alpha},q_{\alpha})$ are as well.
For sufficiently small $t>0$ there are triangles in $M$ with edges $\ell_{\alpha},\ \ell_{\alpha},\ t$ for all $\alpha$ (consider two geodesic rays of length $\ell_{\alpha}$ with common origin, subtending small angles at said origin). But then an application of one of the $\psi_{\alpha}$ will yield a triangle with edges $\varphi_{\alpha}(\ell_{\alpha}),\ \varphi_{\alpha}(\ell_{\alpha}),\ \varphi_{\alpha}(t)$ with $\varphi$ and $\varphi_{\alpha}:=\varphi_{F_{\alpha}}$ as in 2-4 and subsequent discussion, so that by the triangle inequality $\varphi_{\alpha}(t)\leq 2\varphi_{\alpha}(\ell_{\alpha}).$ Since the right-hand side converges to zero by 2-7, we conclude that $\varphi(t)=0$ for all sufficiently small $t>0$. In other words, $\psi\text{ identifies any two points that are sufficiently close.}$ (2-8) Now, for each $\alpha$ we also have, by assumption, points $x_{\alpha},y_{\alpha}$ in $F_{\alpha}$ that achieve distance $1$ upon applying $\psi_{\alpha}$: $|\psi_{\alpha}x_{\alpha}-\psi_{\alpha}y_{\alpha}|=1.$ By compactness, passage to a subnet if necessary allows us to assume that $x_{\alpha}$ and $y_{\alpha}$ converge to $x$ and $y$ in $M$ respectively, and taking the limit over $\alpha$ produces $|\psi x-\psi y|=1$. For some distance $t>0$ small enough to qualify for 2-8 we can find a broken geodesic consisting of some finite number $N$ of length-$t$ segments $x=:p_{0}\to p_{1},\quad p_{1}\to p_{2},\ \cdots,\ p_{N-1}\to p_{N}:=y$ connecting $x$ and $y$. Applying $\psi$ we similarly obtain a broken path of $N$ segments of length $\varphi(t)$ connecting $\psi x$ and $\psi y$; but those endpoints are distance $1$ apart, while $\varphi(t)=0$ by 2-8. This gives the contradiction we seek and finishes the proof. $\blacksquare$ ###### Remark 2.5. As noted above, compact connected Riemannian manifolds are known to have quantum isometry groups and the latter are classical. We note however that the results proven above for Riemannian manifolds involve only local considerations.
This means that, for instance, Proposition 1.3 and Theorems 1.6 and 2.4 apply to submanifolds with corners of compact Riemannian manifolds. Recall [8, Definition 2.1] that the latter are manifolds modeled as usual, via atlases, on the spaces ${\mathbb{R}}_{\geq 0}^{k}\times{\mathbb{R}}^{n-k}$. One can obtain interesting metric spaces by * • starting with a Riemannian manifold; * • cutting out a domain bounded by hypersurfaces intersecting transversally; * • restricting the global geodesic metric to that domain. As soon as one goes beyond manifolds with boundary such spaces are not covered by the main results of [4]. $\blacklozenge$ ## References * [1] Marcel Berger. A panoramic view of Riemannian geometry. Springer-Verlag, Berlin, 2003. * [2] Isaac Chavel. Riemannian geometry—a modern introduction, volume 108 of Cambridge Tracts in Mathematics. Cambridge University Press, Cambridge, 1993. * [3] Alexandru Chirvasitu. Quantum isometries and loose embeddings, 2020. arXiv:2004.09962. * [4] Alexandru Chirvasitu and Debashish Goswami. Existence and Rigidity of Quantum Isometry Groups for Compact Metric Spaces. Comm. Math. Phys., 380(2):723–754, 2020. * [5] Manfredo Perdigão do Carmo. Riemannian geometry. Mathematics: Theory & Applications. Birkhäuser Boston, Inc., Boston, MA, 1992. Translated from the second Portuguese edition by Francis Flaherty. * [6] J. J. Duistermaat and J. A. C. Kolk. Lie groups. Universitext. Springer-Verlag, Berlin, 2000. * [7] Debashish Goswami. Existence and examples of quantum isometry groups for a class of compact metric spaces. Adv. Math., 280:340–359, 2015. * [8] Dominic Joyce. On manifolds with corners. In Advances in geometric analysis, volume 21 of Adv. Lect. Math. (ALM), pages 225–258. Int. Press, Somerville, MA, 2012. * [9] James R. Munkres. Topology. Prentice Hall, Inc., Upper Saddle River, NJ, 2000. Second edition. * [10] Liviu I. Nicolaescu. Random Morse functions and spectral geometry, 2012. arXiv:1209.0639.
Department of Mathematics, University at Buffalo, Buffalo, NY 14260-2900, USA E-mail address: [email protected]
# A Block Bidiagonalization Method for Fixed-Accuracy Low-Rank Matrix Approximation††thanks: This research was supported in part by the National Science Foundation through grant DMS-1745654. Eric Hallman, North Carolina State University ([email protected], https://erhallma.math.ncsu.edu/) ###### Abstract We present randUBV, a randomized algorithm for matrix sketching based on the block Lanczos bidiagonalization process. Given a matrix ${\bf A}$, it produces a low-rank approximation of the form ${\bf UBV}^{T}$, where ${\bf U}$ and ${\bf V}$ have orthonormal columns in exact arithmetic and ${\bf B}$ is block bidiagonal. In finite precision, the columns of both ${\bf U}$ and ${\bf V}$ will be close to orthonormal. Our algorithm is closely related to the randQB algorithms of Yu, Gu, and Li (2018) in that the entries of ${\bf B}$ are incrementally generated and the Frobenius norm approximation error may be efficiently estimated. It is therefore suitable for the fixed-accuracy problem, and so is designed to terminate as soon as a user-specified error tolerance is reached. Numerical experiments suggest that the block Lanczos method is generally competitive with or superior to algorithms that use power iteration, even when ${\bf A}$ has significant clusters of singular values. ###### keywords: randomized algorithm, low-rank matrix approximation, fixed-accuracy problem, block Lanczos 15A18, 15A23, 65F15, 65F30, 68W20 ## 1 Introduction In this paper we consider the problem of finding a quality low-rank approximation $\widetilde{{\bf A}}_{r}$ to a given matrix ${\bf A}\in\mathbb{R}^{m\times n}$, where we assume that $m\geq n$.
In particular we consider the fixed-accuracy problem, where the desired truncation rank $r$ is not known in advance; instead, we want to find the smallest possible $r$ such that $\|{\bf A}-\widetilde{{\bf A}}_{r}\|_{F}<\tau$ for some tolerance $\tau$. The optimal approximation can be found by computing and truncating the SVD of ${\bf A}$, but when ${\bf A}$ is large this method may be impractically expensive. It is therefore increasingly common to use randomized techniques to find an approximation to the dominant subspace of ${\bf A}$: that is, to find a matrix ${\bf Q}\in\mathbb{R}^{m\times r}$ with orthonormal columns so that [12]

(1) ${\bf A}\approx{\bf QB},$

where ${\bf B}$ is an $r\times n$ matrix satisfying

(2) ${\bf B}={\bf Q}^{T}{\bf A}.$

Two variants on this basic approach are randomized subspace iteration and randomized block Lanczos. Algorithms 1 and 2 present prototype algorithms for each of these methods for the fixed-rank problem, where $r$ is specified in advance.

Algorithm 1 Randomized Subspace Iteration (randQB) [12, Alg. 4.3]
Require: ${\bf A}\in\mathbb{R}^{m\times n}$, rank $r$, integer $\ell\geq r$, power parameter $p\geq 0$
Ensure: ${\bf Q}\in\mathbb{R}^{m\times\ell}$ with orthonormal columns, ${\bf B}\in\mathbb{R}^{\ell\times n}$
1: Draw a random standard Gaussian matrix ${\bf\Omega}\in\mathbb{R}^{n\times\ell}$
2: Form ${\bf Y}=({\bf AA}^{T})^{p}{\bf A\Omega}$
3: Compute the QR factorization ${\bf Y}={\bf QR}$
4: ${\bf B}={\bf Q}^{T}{\bf A}$

Algorithm 2 Randomized Block Lanczos [32, Alg.
1]
Require: ${\bf A}\in\mathbb{R}^{m\times n}$, block size $b\geq 1$, rank $r$, iterations $q$ such that $(q+1)b\geq r$
Ensure: ${\bf Q}\in\mathbb{R}^{m\times(q+1)b}$ with orthonormal columns, ${\bf B}\in\mathbb{R}^{(q+1)b\times n}$
1: Draw a random standard Gaussian matrix ${\bf\Omega}\in\mathbb{R}^{n\times b}$
2: Form ${\bf Y}=[{\bf A\Omega},({\bf AA}^{T}){\bf A\Omega},\ldots,({\bf AA}^{T})^{q}{\bf A\Omega}]$
3: Compute the QR factorization ${\bf Y}={\bf QR}$
4: ${\bf B}={\bf Q}^{T}{\bf A}$

Extensions of these algorithms to the fixed-accuracy problem make use of the fact that the columns of ${\bf Q}$ and rows of ${\bf B}$ can be computed incrementally rather than all at once. The process can then be terminated once a user-specified error threshold has been reached, assuming the error can be efficiently computed or estimated. Algorithms for the fixed-accuracy problem are proposed in [12, 17], and more recently by Yu, Gu, and Li in [31]. One algorithm by the latter authors, randQB_EI, is currently the foundation for the MATLAB function svdsketch [18]. The algorithms cited above all rely on subspace iteration rather than the block Lanczos method, despite the fact that Krylov subspace methods are “the classical prescription for obtaining a partial SVD” [12], as with svds in MATLAB. One justification for the focus on subspace iteration is that its convergence analysis is more complete. In particular, the block Lanczos method converges slowly when the spectrum of ${\bf A}$ has a cluster larger than the block size $b$, and the convergence analysis becomes more complicated in this situation. In recent years, however, several works have improved the analysis for randomized block Lanczos. Analyzing Algorithm 2 for the case $b\geq r$, Musco and Musco [19] derive bounds on the approximation error that do not depend on the gaps between the singular values of ${\bf A}$.
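For readers who prefer code, the two fixed-rank prototypes can be sketched in a few lines of NumPy. This is our own illustrative rendering, not an implementation from [12] or [32]: the function names are invented, and the reorthogonalization that a stable implementation requires is omitted.

```python
import numpy as np

def rand_qb(A, r, ell=None, p=0, rng=None):
    """Fixed-rank randomized subspace iteration (after Algorithm 1)."""
    rng = np.random.default_rng(rng)
    ell = r if ell is None else ell
    Omega = rng.standard_normal((A.shape[1], ell))
    Y = A @ Omega
    for _ in range(p):           # Y = (A A^T)^p A Omega
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)
    return Q, Q.T @ A

def rand_block_lanczos(A, b, q, rng=None):
    """Fixed-rank randomized block Lanczos (after Algorithm 2)."""
    rng = np.random.default_rng(rng)
    Omega = rng.standard_normal((A.shape[1], b))
    Y = A @ Omega
    blocks = [Y]
    for _ in range(q):           # [A O, (A A^T) A O, ..., (A A^T)^q A O]
        Y = A @ (A.T @ Y)
        blocks.append(Y)
    Q, _ = np.linalg.qr(np.hstack(blocks))
    return Q, Q.T @ A
```

Both return the pair $({\bf Q},{\bf B})$ of (1)-(2); the block Lanczos variant spends the same number of matrix products building a richer $(q+1)b$-dimensional subspace.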
Yuan, Gu, and Li [32] derive results under the more general condition where ${\bf A}$ has no singular values with multiplicity greater than $b$. Both papers focus mostly on theoretical results, but the latter authors make the following observation: > “A practical implementation of [Algorithm 2] should involve, at the very least, a reorganization of the computation to use the three-term recurrence and bidiagonalization [7], and reorthogonalization of the Lanczos vectors at each step using one of the numerous schemes that have been proposed [7, 21, 24].” The goal of this paper is to provide a practical implementation of Algorithm 2, along with a method for efficiently estimating the Frobenius norm approximation error. ### 1.1 Contributions Our main contribution is the algorithm randUBV (Algorithm 6), which uses the block Lanczos method to solve the fixed-accuracy problem. It is for the most part a straightforward combination of the block Lanczos bidiagonalization process [8] shown in Algorithm 4 with a randomized starting matrix ${\bf V}_{1}={\bf\Omega}$. As such, it yields a factorization of the form ${\bf UBV}^{T}$, where ${\bf U}$ and ${\bf V}$ have orthonormal columns in exact arithmetic and ${\bf B}$ is block bidiagonal. Our secondary contribution is Theorem 4.4, which establishes bounds on the accuracy of the Frobenius norm error estimate (8). Our algorithm has two notable features that make it competitive with methods based on subspace iteration: * • It accepts block sizes smaller than the target rank. Contrary to what an exact arithmetic analysis would suggest, the block Lanczos method can find multiple singular values of ${\bf A}$ even when the multiplicity is greater than the block size $b$. Large clusters in the spectrum of ${\bf A}$ are inconvenient, but not fatal. We can therefore compare randUBV with adaptive methods such as randQB_EI when the two are run with the same block size.
They will have the same cost per iteration when the latter algorithm is run with power parameter $p=0$, and empirically randUBV converges faster. If randQB_EI instead uses $p=1$ or $p=2$ then randUBV empirically requires more iterations to converge, but each iteration costs significantly less. * • It uses one-sided reorthogonalization, wherein ${\bf V}$ is reorthogonalized but ${\bf U}$ is not. This technique was recommended in [25] for the single-vector case (i.e., $b=1$), and leads to considerable cost savings when ${\bf A}$ is sparse and $m\gg n$. If $m\ll n$, our algorithm should be run on ${\bf A}^{T}$ instead. The matrix ${\bf U}$ may slowly lose orthogonality in practice, but Theorem 4.4 shows that our error estimate (8) will still remain accurate. For simplicity, we use full reorthogonalization on ${\bf V}$ as opposed to more carefully targeted methods such as those discussed in [21, 24]. One other design choice merits discussion: deflation, which occurs when the blocks produced by the block Lanczos method are nearly rank-deficient, and which results in a reduction of the block size. In the event of deflation, we propose to augment the block Krylov space in order to keep the block column size constant. This will prevent the process from terminating early in extreme cases such as when ${\bf A}$ is the identity matrix. Numerical experiments on synthetic and real data suggest that randUBV generally compares favorably with randQB and its variants, at least on modestly sized problems. ### 1.2 Outline The paper is organized as follows. In section 2, we review the background of QB algorithms for the fixed-accuracy problem as well as the block Lanczos method. In section 3 we discuss several implementation details including the choice of block size, deflation and augmentation, and one-sided reorthogonalization. We present our main algorithm in section 4 and establish the accuracy of the error indicator.
Our numerical experiments are in section 5, and section 6 offers our concluding remarks and some avenues for future exploration. ### 1.3 Notation Matrices, vectors, integers, and scalars will be respectively denoted by ${\bf A}$, ${\bf a}$, $a$, and $\alpha$. We use $\|{\bf A}\|_{F}$ and $\|{\bf A}\|_{2}$ for the Frobenius norm and operator norm, respectively, and ${\bf I}$ for the identity matrix whose dimensions can be inferred from context. We use MATLAB notation for matrix indices: i.e., ${\bf A}(i,j)$ and ${\bf A}(:,j)$ respectively represent the $(i,j)$ element and the $j$-th column of ${\bf A}$. For the cost analysis of our algorithm we use the same notation as in [17, 31]: $C_{\text{mul}}$ and $C_{\text{qr}}$ will represent constants so that the cost of multiplying two dense matrices of sizes $m\times n$ and $n\times l$ is taken to be $C_{\text{mul}}mnl$ and the cost of computing the QR factorization of an $m\times n$ matrix with $m\geq n$ is taken to be $C_{\text{qr}}mn^{2}$, or $C_{\text{qrcp}}mn^{2}$ if column pivoting is used. ## 2 Background In this section we review the fixed-accuracy QB factorization algorithm randQB_EI and the block Lanczos bidiagonalization process. ### 2.1 A fixed-accuracy QB algorithm In order to extend Algorithm 1 to the fixed-accuracy problem, Yu, Gu, and Li [31] make use of two key ideas. First, for a given block size $b\leq\ell$ the matrix ${\bf\Omega}$ can be generated $b$ columns at a time rather than all at once, allowing the resulting factors ${\bf Q}$ and ${\bf B}$ to be generated incrementally. Second, since ${\bf Q}$ has orthonormal columns and ${\bf B}={\bf Q}^{T}{\bf A}$, it follows [31, Thm. 
1] that (3) $\|{\bf A}-{\bf QB}\|_{F}^{2}=\|{\bf A}-{\bf QQ}^{T}{\bf A}\|_{F}^{2}=\|{\bf A}\|_{F}^{2}-\|{\bf QQ}^{T}{\bf A}\|_{F}^{2}=\|{\bf A}\|_{F}^{2}-\|{\bf B}\|_{F}^{2}.$ As long as the columns of ${\bf Q}$ are kept close to orthonormal, the Frobenius norm error can be efficiently estimated at each step simply by updating $\|{\bf B}\|_{F}$. It is therefore possible to compute the low-rank factorization ${\bf QB}$ and cheaply estimate its error without ever forming the error matrix ${\bf A}-{\bf QB}$ explicitly. Algorithm randQB_EI incorporates both of these ideas, the second of which is particularly useful when ${\bf A}$ is sparse. Algorithm 3 presents code for randQB_EI, which in exact arithmetic will output the same ${\bf QB}$ factorization as randQB when run to the same rank. It is noted in [12] that a stable implementation of Algorithm 1 should include a reorthogonalization step after each application of ${\bf A}$ or ${\bf A}^{T}$. The reorthogonalization step in Line 10 provides further stability. Algorithm 3 Blocked randQB algorithm (randQB_EI) [31, Alg. 
2]
Require: ${\bf A}\in\mathbb{R}^{m\times n}$, block size $b\geq 1$, power parameter $p\geq 0$, tolerance $\tau$
Ensure: ${\bf Q}\in\mathbb{R}^{m\times\ell}$, ${\bf B}\in\mathbb{R}^{\ell\times n}$, such that $\|{\bf A}-{\bf QB}\|_{F}<\tau$
(approximate per-line costs in parentheses)
1: ${\bf Q}=[\ ]$, ${\bf B}=[\ ]$
2: $E=\|{\bf A}\|_{F}^{2}$
3: for $k=1,2,3,\ldots$ do
4: Draw a random standard Gaussian matrix ${\bf\Omega}_{k}\in\mathbb{R}^{n\times b}$
5: ${\bf Q}_{k}=\text{qr}({\bf A\Omega}_{k}-{\bf Q}({\bf B\Omega}_{k}))$ ($C_{\text{mul}}mnb+(k-1)C_{\text{mul}}(m+n)b^{2}+C_{\text{qr}}mb^{2}$)
6: for $j=1:p$ do
7: $\tilde{{\bf Q}}_{k}=\text{qr}({\bf A}^{T}{\bf Q}_{k}-{\bf B}^{T}({\bf Q}^{T}{\bf Q}_{k}))$ ($C_{\text{mul}}mnb+(k-1)C_{\text{mul}}(m+n)b^{2}+C_{\text{qr}}nb^{2}$)
8: ${\bf Q}_{k}=\text{qr}({\bf A}\tilde{{\bf Q}}_{k}-{\bf Q}({\bf B}\tilde{{\bf Q}}_{k}))$ ($C_{\text{mul}}mnb+(k-1)C_{\text{mul}}(m+n)b^{2}+C_{\text{qr}}mb^{2}$)
9: end for
10: ${\bf Q}_{k}=\text{qr}({\bf Q}_{k}-{\bf Q}({\bf Q}^{T}{\bf Q}_{k}))$ ($2(k-1)C_{\text{mul}}mb^{2}+C_{\text{qr}}mb^{2}$)
11: ${\bf B}_{k}={\bf Q}_{k}^{T}{\bf A}$ ($C_{\text{mul}}mnb$)
12: ${\bf Q}=[{\bf Q},\,{\bf Q}_{k}]$
13: ${\bf B}=\begin{bmatrix}{\bf B}^{T},\,{\bf B}_{k}^{T}\end{bmatrix}^{T}$
14: $E=E-\|{\bf B}_{k}\|_{F}^{2}$
15: if $E<\tau^{2}$ then stop
16: end for

Suppose that we stop Algorithm 3 after $t$ iterations, and set $\ell=tb$.
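A minimal NumPy sketch of this fixed-accuracy loop, maintaining the error estimate $E=\|{\bf A}\|_{F}^{2}-\|{\bf B}\|_{F}^{2}$ incrementally as in lines 2 and 14 of Algorithm 3, might read as follows (the function name and the `max_iter` safeguard are ours, not from [31]):

```python
import numpy as np

def rand_qb_ei(A, b, tol, p=0, max_iter=100, rng=None):
    """Fixed-accuracy blocked randQB sketch (after Algorithm 3).

    Grows Q and B one b-column block at a time and tracks
    E = ||A||_F^2 - ||B||_F^2, the squared Frobenius error of A ~ QB,
    without ever forming the residual A - QB explicitly.
    """
    rng = np.random.default_rng(rng)
    m, n = A.shape
    Q = np.zeros((m, 0))
    B = np.zeros((0, n))
    E = np.linalg.norm(A, 'fro') ** 2
    for _ in range(max_iter):
        Omega = rng.standard_normal((n, b))
        Qk, _ = np.linalg.qr(A @ Omega - Q @ (B @ Omega))
        for _ in range(p):  # optional power iterations (lines 6-9)
            Qt, _ = np.linalg.qr(A.T @ Qk - B.T @ (Q.T @ Qk))
            Qk, _ = np.linalg.qr(A @ Qt - Q @ (B @ Qt))
        Qk, _ = np.linalg.qr(Qk - Q @ (Q.T @ Qk))  # reorthogonalize (line 10)
        Bk = Qk.T @ A
        Q = np.hstack([Q, Qk])
        B = np.vstack([B, Bk])
        E -= np.linalg.norm(Bk, 'fro') ** 2
        if E < tol ** 2:  # stopping test (line 15)
            break
    return Q, B
```

Note that the stopping test never touches ${\bf A}-{\bf QB}$; it relies entirely on the running scalar $E$, which is what makes the scheme attractive for sparse ${\bf A}$.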
The runtime of randQB_EI can then be approximated as (4) $\displaystyle\begin{split}T_{\texttt{randQB\\_EI}}&\approx 2C_{\text{mul}}mn\ell+\frac{1}{2}C_{\text{mul}}(3m+n)\ell^{2}+\frac{2}{t}C_{\text{qr}}m\ell^{2}\\\ &\ \ +p\left(2C_{\text{mul}}mn\ell+C_{\text{mul}}(m+n)\ell^{2}+\frac{1}{t}C_{\text{qr}}(m+n)\ell^{2}\right),\end{split}$ where the cost increases more or less proportionally to $p+1$. By comparison, the cost of the fixed-rank prototype algorithm randQB can be approximated as (5) $T_{\texttt{randQB}}\approx 2(p+1)C_{\text{mul}}mn\ell+C_{\text{qr}}m\ell^{2}.$ ### 2.2 Block Lanczos bidiagonalization Here we describe a block Lanczos method for reducing a matrix to block bidiagonal form. Since this method generalizes the single-vector algorithm by Golub and Kahan [6] commonly known as the Golub-Kahan-Lanczos process, we will abbreviate it as bGKL. The bGKL process was introduced by Golub, Luk, and Overton [8] to find the largest singular values and associated singular vectors of a large and sparse matrix. Since then, it has been applied to both least squares problems [14, 27] and total least squares problems [2, 13] with multiple right-hand sides. 
The process takes a matrix ${\bf A}\in\mathbb{R}^{m\times n}$ and matrix ${\bf V}_{1}\in\mathbb{R}^{n\times b}$ with orthonormal columns, and after $k$ steps produces the orthonormal bases ${\bf U}_{(k)}=[{\bf U}_{1},\cdots,{\bf U}_{k}]$ and ${\bf V}_{(k+1)}=[{\bf V}_{1},\cdots,{\bf V}_{k+1}]$ satisfying $\displaystyle\operatorname*{Span}\left\\{{\bf U}_{(k)}\right\\}$ $\displaystyle=\operatorname*{Span}\left\\{{\bf A}{\bf V}_{1},{\bf A}({\bf A}^{T}{\bf A}){\bf V}_{1},\ldots,{\bf A}({\bf A}^{T}{\bf A})^{k-1}{\bf V}_{1}\right\\},$ $\displaystyle\operatorname*{Span}\left\\{{\bf V}_{(k+1)}\right\\}$ $\displaystyle=\operatorname*{Span}\left\\{{\bf V}_{1},({\bf A}^{T}{\bf A}){\bf V}_{1},\ldots,({\bf A}^{T}{\bf A})^{k}{\bf V}_{1}\right\\}.$ Furthermore, it produces the $kb\times(k+1)b$ block bidiagonal matrix (6) ${\bf B}_{k}=\begin{bmatrix}{\bf R}_{1}&{\bf L}_{2}&&&\\\ &{\bf R}_{2}&\ddots&&\\\ &&\ddots&{\bf L}_{k}&\\\ &&&{\bf R}_{k}&{\bf L}_{k+1}\end{bmatrix}$ so that at each step of the process the relations (7) ${\bf AV}_{(k)}={\bf U}_{(k)}{\bf B}_{k}(:,1:kb)\quad\text{and}\quad{\bf A}^{T}{\bf U}_{(k)}={\bf V}_{(k+1)}{\bf B}_{k}^{T}$ are satisfied. Assuming no loss of rank, the blocks $\\{{\bf R}_{i}\\}_{i=1}^{k}$ or $\\{{\bf L}_{i}\\}_{i=1}^{k+1}$ are respectively $b\times b$ upper and lower triangular. 
Algorithm 4 Block Lanczos bidiagonalization process (bGKL) [8]
Require: ${\bf A}\in\mathbb{R}^{m\times n}$, matrix ${\bf V}_{1}\in\mathbb{R}^{n\times b}$ with orthonormal columns
(approximate per-line costs in parentheses)
1: ${\bf U}_{0}={\bf 0}$; ${\bf L}_{1}={\bf 0}$
2: for $k=1,2,\ldots$ do
3: ${\bf U}_{k}{\bf R}_{k}=\text{qr}({\bf A}{\bf V}_{k}-{\bf U}_{k-1}{\bf L}_{k})$ ($C_{\text{mul}}mnb+\frac{1}{2}C_{\text{mul}}mb^{2}+C_{\text{qr}}mb^{2}$)
4: ${\bf V}_{k+1}{\bf L}_{k+1}^{T}=\text{qr}({\bf A}^{T}{\bf U}_{k}-{\bf V}_{k}{\bf R}_{k}^{T})$ ($C_{\text{mul}}mnb+\frac{1}{2}C_{\text{mul}}nb^{2}+C_{\text{qr}}nb^{2}$)
5: end for

The basic outline of the process is given in Algorithm 4, where the costs assume no loss of rank in the blocks $\\{{\bf R}_{i}\\}_{i=1}^{k}$ or $\\{{\bf L}_{i}\\}_{i=1}^{k+1}$. We note that the original algorithm in [8] is organized so that ${\bf B}_{k}$ is square at the end of each iteration. Our current presentation more directly mimics the ${\bf QB}$ factorization, since ${\bf U}_{(k)}{\bf B}_{k}{\bf V}_{(k+1)}^{T}={\bf U}_{(k)}{\bf U}_{(k)}^{T}{\bf A}$ by the second relation in (7). It follows that in exact arithmetic the identity (8) $\|{\bf A}-{\bf U}_{(k)}{\bf B}_{k}{\bf V}_{(k+1)}^{T}\|_{F}^{2}=\|{\bf A}\|_{F}^{2}-\|{\bf B}_{k}\|_{F}^{2}$ will hold, and so the bGKL process can be readily adapted to find a fixed-accuracy approximation to ${\bf A}$. Suppose that we stop the process after $t$ iterations and set $\ell=tb$. The runtime of the bGKL process can then be approximated as (9) $T_{\texttt{bGKL}}\approx 2C_{\text{mul}}mn\ell+\frac{1}{2t}C_{\text{mul}}(m+n)\ell^{2}+\frac{1}{t}C_{\text{qr}}(m+n)\ell^{2}.$ At this point, it is not fair to compare this cost to the cost of (4) because we have not yet accounted for the cost of reorthogonalization in bGKL, which is necessary for stability. Nonetheless, it suggests that we may be able to obtain an algorithm based on bGKL that costs no more per iteration than randQB_EI with power parameter $p=0$.
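A NumPy sketch of the bGKL recurrence, with full one-sided reorthogonalization of ${\bf V}$ (anticipating section 3.2) and assuming no deflation, might look as follows; this is our own illustration, not the randUBV implementation of Algorithm 6:

```python
import numpy as np

def bgkl(A, V1, steps):
    """Block Lanczos bidiagonalization sketch (after Algorithm 4).

    Runs `steps` iterations of the bGKL recurrence, assuming no deflation,
    and returns U = [U_1,...,U_k], V = [V_1,...,V_{k+1}] and the block
    bidiagonal B_k of (6).  Only V is reorthogonalized (one-sided
    reorthogonalization); U is left as produced by the recurrence.
    """
    m, n = A.shape
    b = V1.shape[1]
    U_blocks, V_blocks = [], [V1]
    Uk = np.zeros((m, b))        # plays the role of U_{k-1}, starting at U_0 = 0
    Lk = np.zeros((b, b))        # plays the role of L_k, starting at L_1 = 0
    Bk = np.zeros((steps * b, (steps + 1) * b))
    for k in range(steps):
        Uk, Rk = np.linalg.qr(A @ V_blocks[-1] - Uk @ Lk)   # line 3
        W = A.T @ Uk - V_blocks[-1] @ Rk.T                  # line 4 ...
        V = np.hstack(V_blocks)
        W = W - V @ (V.T @ W)    # full reorthogonalization against all of V
        Vnext, Lt = np.linalg.qr(W)
        Lk = Lt.T                # L_{k+1} is lower triangular
        U_blocks.append(Uk)
        V_blocks.append(Vnext)
        Bk[k * b:(k + 1) * b, k * b:(k + 1) * b] = Rk
        Bk[k * b:(k + 1) * b, (k + 1) * b:(k + 2) * b] = Lk
    return np.hstack(U_blocks), np.hstack(V_blocks), Bk
```

Running this on a random matrix and checking the relations (7) and the error identity (8) numerically is a useful sanity test for any implementation.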
## 3 Implementation details In this section we discuss how to handle several important issues in the implementation of our fixed-accuracy algorithm. The first concerns the difficulty the Lanczos method encounters when ${\bf A}$ has large singular value clusters. The second is the matter of ensuring that the columns of ${\bf U}_{(k)}$ and ${\bf V}_{(k)}$ remain close to orthonormal, and the third is the use of deflation and augmentation when the blocks ${\bf R}_{k}$ or ${\bf L}_{k}$ are rank-deficient. ### 3.1 Block size and singular value clusters It is known that if ${\bf A}$ has a singular value with multiplicity greater than the block size $b$, then in exact arithmetic the block Lanczos process will recover at most $b$ of those singular values. More generally, if the spectrum of ${\bf A}$ has a cluster of size greater than $b$ then the approximate singular vectors recovered by the Lanczos process may converge slowly. This behavior stands in stark contrast to that of blocked subspace iteration methods such as randQB_EI, whose outputs do not in exact arithmetic depend on $b$. For the first situation—singular values with multiplicity greater than $b$—classical results tend to examine a restricted problem. Saad [23] notes that the Lanczos process would simply behave as though it were being performed on a restricted matrix ${\bf A}|_{{\bf S}}$ whose singular values had multiplicity at most $b$. (Strictly speaking, Saad’s analysis is for block Lanczos tridiagonalization applied to a symmetric matrix as opposed to Lanczos bidiagonalization applied to a rectangular matrix. Our focus is on bidiagonalization, but the two processes are closely related.) There is therefore “no loss of generality” in assuming that the singular values of ${\bf A}$ have multiplicity bounded by $b$ for the purpose of analyzing convergence rates.
Other more recent works restrict their attention to the case where the cluster size is bounded by $b$ [16], or where $b$ is greater than or equal to the target rank $r$ [19, 28, 4]. The analysis of Yuan, Gu, and Li [32] makes an important advancement by allowing for cluster sizes (though not multiplicity) greater than $b$, and showing that even within a large cluster the recovered singular values will converge superlinearly in the number of Lanczos iterations. Their numerical experiments on real-world data suggest that smaller block sizes generally lead to faster convergence with respect to the number of flops expended. As it turns out, even singular values with multiplicity greater than $b$ are not fatal to the Lanczos process. Parlett [21] notes that since “rounding errors introduce components in all directions”, even repeated singular vectors will eventually be found (the same tridiagonalization-versus-bidiagonalization caveat as above applies). Simon and Zha [25] add that the singular vectors will not converge in consecutive order: the Lanczos process will likely find several smaller singular values of ${\bf A}$ before it finds copies of the larger repeated ones. What we should expect in practice is that a singular value of multiplicity greater than $b$ (or a cluster of comparable size) will delay convergence, but not prevent it entirely. Thus in spite of complications in the analysis of the block Lanczos method, using a smaller block size can be quite effective in practice. Even when ${\bf A}$ has clusters larger than the block size, we can obtain a good approximation simply by increasing the number of Lanczos iterations. Our numerical experiments support this notion: although we can construct synthetic examples for which randUBV is inferior to methods that use subspace iteration, our algorithm performs quite well on a real-world example with large clusters.
#### 3.1.1 Adaptive block size

An alternate method for dealing with clusters is offered in [30] and explored further in [1, 33]: instead of keeping the block size constant, we may periodically augment the block Krylov space with new vectors in order to better approximate clusters. The rough idea would be to monitor the singular values of ${\bf B}_{k}$, and to increase the block size $b$ so that it remains larger than the largest cluster in ${\bf B}_{k}$. For the sake of keeping the implementation of our algorithm simple, we leave this extension for future exploration.

### 3.2 One-sided reorthogonalization

In exact arithmetic, the matrices ${\bf U}_{(k)}$ and ${\bf V}_{(k)}$ will have orthonormal columns. In practice, they will quickly lose orthogonality due to roundoff error, and so we must take additional steps to mitigate this loss of orthogonality. For the single-vector case $b=1$, Simon and Zha [25] observe that it may suffice to reorthogonalize only one of ${\bf U}_{(k)}$ or ${\bf V}_{(k)}$ in order to obtain a good low-rank approximation. They suggest that if the columns of ${\bf V}_{(k)}$ alone are kept close to orthonormal, then ${\bf U}_{(k)}{\bf B}_{k}{\bf V}_{(k+1)}^{T}$ will remain a good approximation to ${\bf A}$ regardless of the orthogonality of ${\bf U}_{(k)}$. Separately, experiments by Fong and Saunders [5] in the context of least-squares problems suggest that keeping ${\bf V}_{(k)}$ orthonormal to machine precision $\epsilon_{\text{mach}}$ might be enough to keep ${\bf U}_{(k)}$ orthonormal to $\mathcal{O}(\sqrt{\epsilon_{\text{mach}}})$, at least until the least-squares solver reaches a relative backward error of $\sqrt{\epsilon_{\text{mach}}}$. For the sake of computational efficiency, we therefore choose to explicitly reorthogonalize ${\bf V}_{(k)}$ but not ${\bf U}_{(k)}$ (assuming that $m\geq n$). Reorthogonalization can take up a significant portion of the runtime of our algorithm, particularly if ${\bf A}$ is sparse.
However, it is known for the Lanczos process that orthogonality is lost only in the direction of singular vectors that have already converged [20]. Thus in a high-quality implementation, it should be possible to save time by orthogonalizing each block ${\bf V}_{k}$ against a smaller carefully chosen set of vectors obtained from ${\bf V}_{(k-1)}$ (see [21, 10, 24] for a few such proposals). In our implementation, we use full reorthogonalization for simplicity. We note that even if ${\bf A}$ is square, full reorthogonalization will cost no more than the equivalent step in randQB_EI (line 10 of Algorithm 3). ### 3.3 Deflation In practice, the block Lanczos process may yield blocks ${\bf R}_{k}$ or ${\bf L}_{k}$ that are rank-deficient or nearly so. Here and with other block Krylov methods, it is typical to reduce the block size $b$ in response so that ${\bf R}_{k}$ and ${\bf L}_{k}$ retain full row rank and column rank, respectively. This process is known as deflation. For more background, we refer the reader to the survey paper by Gutknecht [11] and the references therein. In the context of solving systems with multiple right-hand sides, Gutknecht stresses that deflation is highly desirable. Indeed, when solving a system such as ${\bf AX}={\bf B}$, it is precisely the dimension reduction resulting from deflation that gives block methods an advantage over methods that solve each right hand side separately. In this context, deflation might occur if ${\bf B}$ is itself rank-deficient, or if ${\bf B}$ has some notable rank structure in relation to the matrix ${\bf A}$. When running block Lanczos with a randomly chosen starting matrix ${\bf V}_{1}$ (i.e., ${\bf V}_{1}=\text{qr}({\bf\Omega})$ and ${\bf\Omega}$ is a standard Gaussian matrix), we do not expect deflation to occur frequently since ${\bf\Omega}$ is not likely to have any notable structure with respect to ${\bf A}$. 
Nonetheless, a reliable implementation should be prepared for the possibility, and so we examine the details here. Björck [2] proposes computing the QR factorizations in lines 3–4 of Algorithm 4 using Householder reflections without column pivoting. The resulting matrix ${\bf B}_{k}$ will be not just block bidiagonal, but a banded matrix whose effective bandwidth begins at $b$ and decreases with each deflation. Hnětynková et al. [13] refer to ${\bf B}_{k}$ as a $b$-wedge shaped matrix. If the effective bandwidth decreases to zero, the bidiagonalization process will terminate.

Algorithm 5 Deflated QR (deflQR)

Input: ${\bf X}\in\mathbb{R}^{m\times n}$, deflation tolerance $\delta$
Output: ${\bf Q}\in\mathbb{R}^{m\times s}$ with orthonormal columns, ${\bf R}\in\mathbb{R}^{s\times n}$, rank $s$

1: Compute the pivoted QR factorization ${\bf X}{\bf\Pi}=\widehat{{\bf Q}}\widehat{{\bf R}}$
2: Find the largest $s$ such that $|\widehat{{\bf R}}(s,s)|\geq\delta$
3: ${\bf R}=\widehat{{\bf R}}(1:s,:){\bf\Pi}^{T}$
4: ${\bf Q}=\widehat{{\bf Q}}(:,1:s)$

We propose to instead use QR with column pivoting, which is slower and less elegant but simpler to implement in terms of readily available subroutines. The procedure is outlined in Algorithm 5, where the deflation tolerance $\delta$ is presumably somewhat larger than $\epsilon_{\text{mach}}\|{\bf A}\|_{2}$. Lines 3–4 of Algorithm 4 would use this modified routine in place of unpivoted QR, and as Björck [2] notes, the recurrence in those lines will still work in the presence of deflation.

### 3.4 Augmentation

When using block Lanczos to solve systems of linear equations, deflation can be highly beneficial. In the context of matrix sketching, it is less desirable. Consider an extreme example where the columns of ${\bf V}_{1}$ are right singular vectors of ${\bf A}$: the Lanczos process will terminate after a single iteration, returning an approximation of the form ${\bf A}\approx{\bf U}_{1}{\bf\Sigma}{\bf V}_{1}^{T}$.
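As an implementation aside, the deflQR routine of Algorithm 5 corresponds to the following sketch. This is our own NumPy/SciPy translation, not the paper's code: `scipy.linalg.qr` with `pivoting=True` supplies the pivoted factorization, and with column pivoting the diagonal of $\widehat{{\bf R}}$ is nonincreasing in magnitude, so finding the largest $s$ with $|\widehat{{\bf R}}(s,s)|\geq\delta$ is just a count.

```python
import numpy as np
from scipy.linalg import qr

def defl_qr(X, delta):
    """Deflated QR (Algorithm 5): pivoted QR truncated at tolerance delta.

    Returns Q (m x s, orthonormal columns), R (s x n), and the rank s;
    the discarded part of X has norm on the order of delta.
    """
    Qhat, Rhat, piv = qr(X, mode="economic", pivoting=True)
    # |diag(Rhat)| is nonincreasing under column pivoting.
    s = int(np.sum(np.abs(np.diag(Rhat)) >= delta))
    R = np.zeros((s, X.shape[1]))
    R[:, piv] = Rhat[:s, :]      # undo the permutation: R = Rhat(1:s,:) Pi^T
    return Qhat[:, :s], R, s
```

For a rank-deficient block, $s$ falls below the number of columns, and the augmentation step of section 3.4 can then top the block back up to $b$ columns.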
Termination at this point would yield accurate singular vectors, but the factorization may not approximate ${\bf A}$ to within the desired error tolerance. As mentioned before, we do not expect deflation to occur frequently if ${\bf V}_{1}$ is chosen randomly. However, if we do not make any further adjustments for deflation our algorithm would fail to converge on cases as simple as ${\bf A}={\bf I}$. In order to make our method more robust, we will replace any deflated vectors with new randomly drawn ones in order to keep the block column size constant. Similar augmentation techniques have been proposed to prevent breakdown in the case of the nonsymmetric Lanczos process [29] and GMRES [22]. More specifically, if Algorithm 5 returns a factorization ${\bf V}_{k}{\bf L}_{k}^{T}$ with rank less than $b$, we generate a standard Gaussian matrix ${\bf\Omega}_{k}$ so that $[{\bf V}_{k},\ {\bf\Omega}_{k}]$ has $b$ columns. We then orthogonalize ${\bf\Omega}_{k}$ against ${\bf V}_{k}$ and ${\bf V}_{(k-1)}$, obtaining ${\bf V}_{k}^{\prime}$. The resulting matrix $[{\bf V}_{k},{\bf V}_{k}^{\prime}]$ is then used in place of ${\bf V}_{k}$ in the next step of the Lanczos process. In keeping with the spirit of one-sided reorthogonalization, we do not augment ${\bf U}_{k}$ if a block ${\bf R}_{k}$ is found to be rank deficient. This will allow us to avoid accessing the matrix ${\bf U}_{(k-1)}$ while the block Lanczos process is running. As a consequence, the blocks of ${\bf B}_{k}$ will each have $b$ columns, but some may have fewer than $b$ rows. We observe that in the presence of augmentation, the space $\text{Span}\left\\{{\bf V}_{(k)}\right\\}$ will not be a block Krylov space, but will instead be the sum of multiple block Krylov spaces with different dimensions. As of the time of writing we are not aware of any convergence results for this more general case. ## 4 Fixed-accuracy algorithm Algorithm 6 presents code for randUBV. 
Ignoring the augmentation step in line 16, the cost is more or less equal to the cost of bGKL plus the cost of reorthogonalizing ${\bf V}_{k+1}$ in line 11. Thus if we stop the process after $t$ iterations and set $\ell=tb$, the total cost is approximately

(10) $T_{\texttt{randUBV}}\approx 2C_{\text{mul}}mn\ell+C_{\text{mul}}n\ell^{2}+\frac{1}{2t}C_{\text{mul}}(m+n)\ell^{2}+\frac{1}{t}C_{\text{qr}}(m+n)\ell^{2}.$

Comparing this quantity to (4), we see that randUBV requires fewer floating-point operations than randQB_EI when run for the same number of iterations, even when the latter is run with power parameter $p=0$. In particular, the cost of one-sided reorthogonalization is only $\mathcal{O}(n\ell^{2})$ while the stabilization steps in lines 5 and 10 of randQB_EI cost $\mathcal{O}((m+n)\ell^{2})$. We can therefore expect that if ${\bf A}$ is sparse and $m\gg n$, randUBV may run significantly faster.

Algorithm 6 Blocked Bidiagonalization algorithm (randUBV), with approximate costs shown at right

Input: ${\bf A}\in\mathbb{R}^{m\times n}$, block size $b$, relative error $\tau$, deflation tolerance $\delta$
Output: ${\bf U}$, ${\bf B}$, ${\bf V}$, such that $\|{\bf A}-{\bf UBV}^{T}\|_{F}<\tau\|{\bf A}\|_{F}$

1: $E=\|{\bf A}\|_{F}^{2}$
2: Draw a random standard Gaussian matrix ${\bf\Omega}\in\mathbb{R}^{n\times b}$
3: ${\bf V}_{1}=\text{qr}({\bf\Omega})$ (cost $C_{\text{qr}}nb^{2}$)
4: ${\bf U}_{0}={\bf 0}$; ${\bf L}_{1}={\bf 0}$
5: ${\bf V}={\bf V}_{1}$; ${\bf U}=[\,]$
6: for $k=1,2,3,\ldots$ do
7: $[{\bf U}_{k},{\bf R}_{k}]=\texttt{deflQR}({\bf A}{\bf V}_{k}-{\bf U}_{k-1}{\bf L}_{k},\delta)$ (cost $C_{\text{mul}}mnb+\frac{1}{2}C_{\text{mul}}mb^{2}+C_{\text{qrcp}}mb^{2}$)
8: ${\bf U}=[{\bf U},{\bf U}_{k}]$
9: $E=E-\|{\bf R}_{k}\|_{F}^{2}$
10: ${\bf V}_{k+1}={\bf A}^{T}{\bf U}_{k}-{\bf V}_{k}{\bf R}_{k}^{T}$ (cost $C_{\text{mul}}mnb+\frac{1}{2}C_{\text{mul}}nb^{2}$)
11: ${\bf V}_{k+1}={\bf V}_{k+1}-{\bf V}({\bf V}^{T}{\bf V}_{k+1})$ (cost $2kC_{\text{mul}}nb^{2}$)
12: $[{\bf V}_{k+1},{\bf L}_{k+1}^{T},s]=\texttt{deflQR}({\bf V}_{k+1},\delta)$ (cost $C_{\text{qrcp}}nb^{2}$)
13: ${\bf V}=[{\bf V},{\bf V}_{k+1}]$
14: if $s<b$ then
15: Draw a random standard Gaussian matrix ${\bf\Omega}_{k}\in\mathbb{R}^{n\times(b-s)}$
16: ${\bf V}_{k+1}^{\prime}=\text{qr}({\bf\Omega}_{k}-{\bf V}({\bf V}^{T}{\bf\Omega}_{k}))$ (cost $2kC_{\text{mul}}nb(b-s)+C_{\text{qr}}n(b-s)^{2}$)
17: ${\bf V}=[{\bf V},{\bf V}_{k+1}^{\prime}]$
18: end if
19: $E=E-\|{\bf L}_{k+1}\|_{F}^{2}$
20: if $E<\tau^{2}\|{\bf A}\|_{F}^{2}$ then stop
21: end for

Since our focus is on the fixed-accuracy algorithm, however, different algorithms (and for randQB_EI, different power parameters $p$) will converge after different numbers of iterations. We must therefore consider not just the cost per iteration, but how quickly the approximations converge. We discuss this matter further along with the numerical experiments in section 5.

### 4.1 Approximation accuracy

It is noted in [31] that due to cancellation, the computed value of $E=\|{\bf A}\|_{F}^{2}-\|{\bf B}\|_{F}^{2}$ may be inaccurate when $E$ is very small. In order to estimate the error $E$ to within a relative tolerance of $\gamma$ (say, $\gamma=1\%$), the authors suggest that the absolute accuracy tolerance $\tau$ for the QB factorization should be set large enough to satisfy

(11) $\tau>\sqrt{E}\geq\sqrt{\frac{4\epsilon_{\text{mach}}}{\gamma}}\|{\bf A}\|_{F},$

where $\epsilon_{\text{mach}}$ is the machine precision. In short, the proposed method of error estimation cannot reliably estimate a relative error below $2\sqrt{\epsilon_{\text{mach}}}$. We provide a similar analysis in order to account for deflation and loss of orthogonality of ${\bf U}_{(k)}$. In particular, we show that the error estimate can remain accurate even as ${\bf U}_{(k)}$ loses orthogonality in practice. To that end, we define the local loss of orthogonality of a matrix as follows:

###### Definition 4.1.
Given a matrix ${\bf U}_{(k)}=[{\bf U}_{1},\ldots,{\bf U}_{k}]$, the local loss of orthogonality of ${\bf U}_{(k)}$ is defined as $\varepsilon_{k}=\max\left\\{\max_{1\leq i\leq k}\|{\bf U}_{i}^{T}{\bf U}_{i}-{\bf I}\|_{2},\ \max_{2\leq i\leq k}\|{\bf U}_{i-1}^{T}{\bf U}_{i}\|_{2}\right\\}$ The main idea is that we do not require $\|{\bf U}_{(k)}^{T}{\bf U}_{(k)}-{\bf I}\|_{2}$ to be small. Instead, we need only the milder condition that adjacent blocks be close to orthogonal. This idea bears some resemblance to the work [26], which uses local recurrence formulas to show that certain error estimates for the conjugate gradient method remain accurate in a finite precision setting. ###### Lemma 4.2. Consider the matrix ${\bf U}_{(k)}=[{\bf U}_{1},\ldots,{\bf U}_{k}]$, and let $\varepsilon_{k}$ denote the local loss of orthogonality of ${\bf U}_{(k)}$. Let ${\bf B}_{k}$ be a block upper bidiagonal matrix whose blocks are partitioned conformally with those of ${\bf U}_{(k)}$. Then $\|{\bf U}_{(k)}{\bf B}_{k}\|_{F}^{2}=(1+\theta)\|{\bf B}_{k}\|_{F}^{2},\quad|\theta|\leq 2\varepsilon_{k}.$ ###### Proof 4.3. We will find the squared Frobenius norm of ${\bf U}_{(k)}{\bf B}_{k}$ one block column at a time, and use the fact that since ${\bf B}_{k}$ is block bidiagonal, each block column in the product uses at most two adjacent blocks of ${\bf U}_{(k)}$. Let $\\{{\bf R}_{i}\\}_{i=1}^{k}$ denote the blocks on the main block diagonal of ${\bf B}_{k}$, and let $\\{{\bf L}_{i}\\}_{i=2}^{k+1}$ denote the off- diagonal blocks. 
Then for $2\leq i\leq k$, the squared Frobenius norm of the $i$-th block column of ${\bf U}_{(k)}{\bf B}_{k}$ is given by

(12) $\|{\bf U}_{i-1}{\bf L}_{i}+{\bf U}_{i}{\bf R}_{i}\|_{F}^{2}=\|{\bf U}_{i-1}{\bf L}_{i}\|_{F}^{2}+\|{\bf U}_{i}{\bf R}_{i}\|_{F}^{2}+2\operatorname*{tr}\left({\bf R}_{i}^{T}{\bf U}_{i}^{T}{\bf U}_{i-1}{\bf L}_{i}\right).$

Examining the first term, it can be seen that

$\|{\bf U}_{i-1}{\bf L}_{i}\|_{F}^{2}=\operatorname*{tr}({\bf L}_{i}^{T}{\bf U}_{i-1}^{T}{\bf U}_{i-1}{\bf L}_{i})=\operatorname*{tr}({\bf L}_{i}^{T}({\bf U}_{i-1}^{T}{\bf U}_{i-1}-{\bf I}){\bf L}_{i})+\operatorname*{tr}({\bf L}_{i}^{T}{\bf L}_{i})=(1+\theta_{1})\|{\bf L}_{i}\|_{F}^{2},$

where $|\theta_{1}|\leq\varepsilon_{k}$. A similar result applies to the term $\|{\bf U}_{i}{\bf R}_{i}\|_{F}^{2}$. As for the final term, we find that

$2\left|\operatorname*{tr}({\bf R}_{i}^{T}{\bf U}_{i}^{T}{\bf U}_{i-1}{\bf L}_{i})\right|\leq 2\|{\bf U}_{i}^{T}{\bf U}_{i-1}\|_{2}\|{\bf R}_{i}\|_{F}\|{\bf L}_{i}\|_{F}\leq 2\varepsilon_{k}\|{\bf R}_{i}\|_{F}\|{\bf L}_{i}\|_{F}\leq\varepsilon_{k}(\|{\bf R}_{i}\|_{F}^{2}+\|{\bf L}_{i}\|_{F}^{2}).$

By adding these expressions back together we arrive at the bound

(13) $\|{\bf U}_{i-1}{\bf L}_{i}+{\bf U}_{i}{\bf R}_{i}\|_{F}^{2}=(1+\theta)(\|{\bf R}_{i}\|_{F}^{2}+\|{\bf L}_{i}\|_{F}^{2}),\quad|\theta|\leq 2\varepsilon_{k},$

so the desired relative error bound holds for each block column (the first and last columns may be checked separately). The main claim then follows by summing over the block columns.

Next, we observe that with one-sided reorthogonalization of ${\bf V}_{(k)}$ and in the absence of deflation, the first relation in (7) will remain accurate to machine precision regardless of the orthogonality of ${\bf U}_{(k)}$ (as noted in [25], the second relation will not).
In the presence of deflation, the first relation must be amended slightly. We rewrite it as (14) ${\bf A}{\bf V}_{(k)}={\bf U}_{(k)}{\bf B}_{k}^{\prime}+{\bf D}_{k},$ where ${\bf B}_{k}^{\prime}$ is shorthand for ${\bf B}_{k}(:,1:kb)$ and ${\bf D}_{k}$ is a matrix accounting for all deflations in ${\bf U}_{(k)}$. Assuming the column pivoting in Algorithm 5 selects at each step the column with the largest 2-norm, it can be verified that $\|{\bf D}_{k}\|_{F}\leq\delta\sqrt{d}$, where $\delta$ is the deflation tolerance and $d$ is the total number of columns that have been removed from ${\bf U}_{(k)}$ through deflation. We now show that the error estimate $E=\|{\bf A}\|_{F}^{2}-\|{\bf B}_{k}\|_{F}^{2}$ will remain accurate up to terms involving the deflation tolerance and the local loss of orthogonality in ${\bf U}_{(k)}$. The proof makes the simplifying assumptions that ${\bf V}_{(k+1)}$ has orthonormal columns and that there is no rounding error term in (14), but accounting for both of these effects will change the bound (15) by at most $\mathcal{O}(\epsilon_{\text{mach}}\|{\bf A}\|_{F}^{2})$. The proof also ignores the effect of cancellation in the computation of $E$, so as with [31] we cannot expect to reliably estimate a relative error below $\sqrt{\epsilon_{\text{mach}}}$. ###### Theorem 4.4. Given a matrix ${\bf A}$, let ${\bf U}_{(k+1)}$, ${\bf B}_{k+1}^{\prime}$, and ${\bf V}_{(k+1)}$ be as produced by Algorithm 6 with deflation tolerance $\delta$. Let $\varepsilon_{k+1}$ denote the local loss of orthogonality of ${\bf U}_{(k+1)}$. Assume that ${\bf V}_{(k+1)}$ has orthonormal columns. Assume that (14) holds exactly at each iteration, and let $d$ be the number of columns removed from ${\bf U}_{(k+1)}$ due to deflation. Finally, let $E=\|{\bf A}\|_{F}^{2}-\|{\bf B}\|_{F}^{2}$. 
Then

(15) $\|{\bf A}-{\bf U}_{(k)}{\bf B}_{k}{\bf V}_{(k+1)}^{T}\|_{F}^{2}\leq E+4\varepsilon_{k+1}\|{\bf A}\|_{F}^{2}+2\delta\sqrt{d}(1+2\varepsilon_{k+1})\|{\bf A}\|_{F}.$

###### Proof 4.5.

First, by assuming the columns of ${\bf V}_{(k+1)}$ are orthonormal we find that

(16) $\|{\bf A}-{\bf U}_{(k)}{\bf B}_{k}{\bf V}_{(k+1)}^{T}\|_{F}^{2}=\|{\bf A}\|_{F}^{2}+\|{\bf U}_{(k)}{\bf B}_{k}\|_{F}^{2}-2\operatorname*{tr}({\bf A}{\bf V}_{(k+1)}{\bf B}_{k}^{T}{\bf U}_{(k)}^{T}).$

By assuming that (14) holds exactly at each step, we also get the identity

${\bf A}{\bf V}_{(k+1)}={\bf U}_{(k+1)}{\bf B}_{k+1}^{\prime}+{\bf D}_{k+1}={\bf U}_{(k)}{\bf B}_{k}+[{\bf 0},{\bf U}_{k+1}{\bf R}_{k+1}]+{\bf D}_{k+1},$

where $\|{\bf D}_{k+1}\|_{F}\leq\delta\sqrt{d}$. It follows that

(17) $\operatorname*{tr}({\bf A}{\bf V}_{(k+1)}{\bf B}_{k}^{T}{\bf U}_{(k)}^{T})=\|{\bf U}_{(k)}{\bf B}_{k}\|_{F}^{2}+\operatorname*{tr}({\bf U}_{k}^{T}{\bf U}_{k+1}{\bf R}_{k+1}{\bf L}_{k+1}^{T})+\operatorname*{tr}({\bf D}_{k+1}{\bf B}_{k}^{T}{\bf U}_{(k)}^{T}).$

From the definition of $\varepsilon_{k+1}$ we have

(18) $\left|\operatorname*{tr}({\bf U}_{k}^{T}{\bf U}_{k+1}{\bf R}_{k+1}{\bf L}_{k+1}^{T})\right|\leq\|{\bf U}_{k}^{T}{\bf U}_{k+1}\|_{2}\|{\bf R}_{k+1}\|_{F}\|{\bf L}_{k+1}\|_{F}\leq\varepsilon_{k+1}\|{\bf A}\|_{F}^{2},$

and since $\|{\bf D}_{k+1}\|_{F}\leq\delta\sqrt{d}$ we also have

(19) $\left|\operatorname*{tr}({\bf D}_{k+1}{\bf B}_{k}^{T}{\bf U}_{(k)}^{T})\right|\leq\|{\bf D}_{k+1}\|_{F}\|{\bf U}_{(k)}{\bf B}_{k}\|_{F}\leq\delta\sqrt{d}\|{\bf U}_{(k)}{\bf B}_{k}\|_{F}.$

Lemma 4.2 gives us bounds on $\|{\bf U}_{(k)}{\bf B}_{k}\|_{F}^{2}$, so by returning to (16) and using (17), (18), and (19), we conclude that

$\|{\bf A}-{\bf U}_{(k)}{\bf B}_{k}{\bf V}_{(k+1)}^{T}\|_{F}^{2}=\|{\bf A}\|_{F}^{2}+\|{\bf U}_{(k)}{\bf B}_{k}\|_{F}^{2}-2\operatorname*{tr}({\bf A}{\bf V}_{(k+1)}{\bf B}_{k}^{T}{\bf U}_{(k)}^{T})\leq\|{\bf A}\|_{F}^{2}-\|{\bf U}_{(k)}{\bf B}_{k}\|_{F}^{2}+2\varepsilon_{k+1}\|{\bf A}\|_{F}^{2}+2\delta\sqrt{d}\|{\bf U}_{(k)}{\bf B}_{k}\|_{F}\leq E+4\varepsilon_{k+1}\|{\bf A}\|_{F}^{2}+2\delta\sqrt{d}(1+2\varepsilon_{k+1})\|{\bf A}\|_{F}.$

Thus as long as local orthogonality is maintained for ${\bf U}_{(k)}$ and as long as the number of deflations is not too large, we can expect $E$ to remain an accurate estimate of the Frobenius norm approximation error, at least when the error tolerance is not too small.

### 4.2 Postprocessing of ${\bf B}$

Recall that our original goal for the fixed-accuracy problem was not just to find a factorization that satisfies the bound $\|{\bf A}-{\bf UBV}^{T}\|_{F}<\tau$, but to find the factorization with the smallest rank that does so. In order to accomplish this, we may compute the SVD of ${\bf B}$ as ${\bf B}=\hat{{\bf U}}{\bf\Sigma}\hat{{\bf V}}^{T}$, truncate it to the smallest rank $r$ such that $\|{\bf A}-\hat{{\bf U}}_{r}{\bf\Sigma}_{r}\hat{{\bf V}}_{r}^{T}\|_{F}<\tau$, then approximate the left and right singular vectors of ${\bf A}$ by ${\bf U}\hat{{\bf U}}_{r}$ and ${\bf V}\hat{{\bf V}}_{r}$. It should be noted that since ${\bf B}$ is a block bidiagonal matrix, its SVD can in theory be computed more efficiently than if ${\bf B}$ were dense. Algorithms for computing the SVD typically first reduce the matrix to bidiagonal form [6], and ${\bf B}$ can be efficiently reduced to this form using band reduction techniques as in [15]. This postprocessing step takes on additional importance when dealing with the block Lanczos method rather than subspace iteration. Where subspace iteration will yield a matrix ${\bf B}$ whose singular values are all decent approximations of the top singular values of ${\bf A}$, the factor ${\bf B}$ produced by the Lanczos method will contain approximations to the smallest singular values of ${\bf A}$ as well [9].
It is therefore possible that the matrix ${\bf B}$ produced by randUBV can be truncated significantly without diminishing the quality of the approximation. In fact, if one has the goal of obtaining a factorization whose rank is as small as possible, we recommend setting the stopping tolerance $\tau_{\text{stop}}$ slightly smaller than the desired approximation tolerance $\tau_{\text{err}}$ (or similarly, running the algorithm for a few more iterations after the approximation tolerance has already been satisfied). Doing so may significantly reduce the rank $r$ of the truncated SVD, which will in turn pay dividends by reducing the cost of computing ${\bf U}\hat{{\bf U}}_{r}$ and ${\bf V}\hat{{\bf V}}_{r}$.

## 5 Numerical experiments

Here we report the results of numerical experiments on synthetic and real test cases. We run four sets of experiments in order to examine the following:

1. The rate of convergence by iteration. We use synthetic matrices whose spectra decay at different rates, and compare randUBV with randQB_EI using power iterations $p=0,1,2$ for the latter.
2. The effect of sparsity and truncation rank on reorthogonalization costs.
3. The effect of block size on the time and number of iterations required for convergence.
4. The effect of choosing a smaller stopping tolerance $\tau_{\text{stop}}<\tau_{\text{err}}$ on the quality of the approximation.

All experiments were carried out in MATLAB 2020b on a 4-core Intel Core i7 with 32GB RAM.

Figure 1: Convergence rate by iteration. (a) Left: slow decay. Right: very slow decay. (b) Left: fast decay. Right: singular values have multiplicity greater than the block size. In all cases but the last, randUBV requires fewer iterations for convergence than randQB_EI with $p=0$ but more than randQB_EI with $p=1$.
### 5.1 Convergence rate by iteration

For our first set of test cases we created matrices of size $2000\times 2000$ with the form ${\bf A}={\bf U\Sigma V}^{T}$, where ${\bf U}$ and ${\bf V}$ were formed by orthogonalizing standard Gaussian matrices and ${\bf\Sigma}$ was set in the following manner:

* (Matrix 1) Slow decay, in which $\sigma_{j}=1/j^{2}$ for $1\leq j\leq 2000$.
* (Matrix 2) Very slow decay, in which $\sigma_{j}=1/j$ for $1\leq j\leq 2000$.
* (Matrix 3) Fast decay, in which $\sigma_{j}=\exp(-j/20)$ for $1\leq j\leq 2000$.
* (Matrix 4) Step function decay, in which $\sigma_{j}=10^{-0.6(\lceil j/30\rceil-1)}$ for $1\leq j\leq 2000$. Each singular value of ${\bf A}$ (except for the smallest) has multiplicity 30.

In all four cases, we ran the sketching algorithms to a maximum rank $k=200$ using block size $b=10$. The deflation tolerance was set at $\delta=10^{-12}\sqrt{\|{\bf A}\|_{1}\|{\bf A}\|_{\infty}}$, but we did not encounter deflation in any of these cases. Results are shown in Figure 1. In the first three test cases, the approximation error for randUBV was smaller than that of randQB_EI (with power parameter $p=0$) for every iteration after the first. It lagged somewhat behind randQB_EI with $p=1$ or $p=2$, both of which were quite close to optimal. In the final case, where the singular values of ${\bf A}$ were chosen to have multiplicity larger than the block size, randUBV lagged significantly behind even randQB_EI with $p=0$. We note that algorithm randUBV did nonetheless converge, which would not have been possible in exact arithmetic. Finally, we offer a snapshot of the singular values of ${\bf B}_{200}$ after the algorithms have terminated. Results for test cases 1 and 4 are shown in Figure 2. We note that the leading singular values returned by randUBV are more accurate than those returned by randQB_EI with $p=0$ and comparable to the cases $p=1$ or $p=2$.
The smallest singular values for randUBV are much smaller than their randQB counterparts, which appears to be undesirable but has a bit of a silver lining: it suggests that the rank of ${\bf B}_{k}$ can be truncated without losing much approximation accuracy.

Figure 2: Singular values of ${\bf B}_{k}$ after termination. Left: slow decay. Right: step function decay.

### 5.2 Reorthogonalization costs

For our second set of test cases we generated random sparse matrices as A = sprand(m,n,d) with $n=4000$ columns and varying numbers of rows $m$ and densities $d$. We then approximated A to a variable rank $k$ using randUBV and randQB_EI with $p=0$. We tested three different variations:

* Number of rows $m$ varying from $8000$ to $40000$, rank $k=600$, and $d=0.8\%$ nonzeros.
* Number of rows $m=24000$, rank $k$ varying from $200$ to $1000$, and $d=0.8\%$ nonzeros.
* Number of rows $m=24000$, rank $k=600$, and nonzeros varying from $d=0.4\%$ to $d=2\%$.

Figure 3: Effects of sparsity (left) and approximation rank (right) on run time.

Results for the second and third cases are shown in Figure 3, which confirm our general expectations: for a rectangular matrix with $m>n$, if the matrix is sparse or the approximation rank large then reorthogonalization will take up a larger proportion of the overall cost. Consequently, randUBV will gain a competitive advantage over randQB_EI due to the fact that it uses one-sided reorthogonalization. This effect will be more pronounced the larger $m$ is compared to $n$, although we found that changing $m$ alone did not have much effect on the relative runtimes of the two algorithms.

### 5.3 Block size

For our third set of test cases, we examine how the choice of block size affects the time and number of iterations required for convergence.
We use one synthetic matrix and two real ones: the synthetic matrix is a $4000\times 4000$ matrix whose singular values decrease according to the step function $\sigma_{j}=10^{-0.1(\lceil j/30\rceil-1)}$. Thus each singular value except for the last has multiplicity 30. The first real matrix is a dense $3168\times 4752$ matrix, representing the grayscale image of a spruce pine. The second, lp_cre_b, comes from a linear programming problem from the SuiteSparse collection [3], and is a $9648\times 77137$ sparse matrix with $260,785$ nonzero elements and at most 9 nonzero elements per column. This second matrix has several sizeable clusters of singular values: for example, $\sigma_{268}\approx 71.10$ and $\sigma_{383}\approx 70.77$. The median relative gap $(\sigma_{k}-\sigma_{k+1})/\sigma_{k+1}$ among the first 800 singular values is about $8.6\times 10^{-5}$, and the smallest relative gap is about $2.3\times 10^{-8}$. Prior to running the sketching algorithms, both matrices were transposed in order to have more rows than columns. Figure 4: Left: image of pinus glabra. Right: leading singular values of lp_cre_b. We compare randUBV to randQB_EI with power parameter $p=1$. For both algorithms we approximate the synthetic matrix to a relative error $\tau_{\text{err}}=0.01$, the grayscale image to a relative error $\tau_{\text{err}}=0.1$, and the SuiteSparse matrix to a relative error $\tau_{\text{err}}=0.5$. Results are shown in Figure 5. The behavior of randQB_EI was fairly straightforward: using larger block sizes was more efficient, at least up to the point where the block size was large enough to waste computation by computing ${\bf Q}$ and ${\bf B}$ to a larger rank than necessary. This makes sense because larger block sizes offer more opportunities for using BLAS 3 operations and parallelization. Relatedly, we note that MATLAB’s svdsketch function adaptively increases the block size in order to accelerate convergence. 
The behavior of randUBV was very similar to that of randQB_EI on the grayscale image, but less so on the other two cases. For the synthetic matrix whose singular values were distributed according to a step function, increasing $b$ from just below the cluster size to just above it led to a sharp drop in both the time and number of iterations required. On the matrix lp_cre_b, the optimal block size was near $b=10$ even though the approximation rank was close to constant over all block sizes tested. We speculate that the reason for this is that lp_cre_b is both sparse and rectangular, so dense QR operations are a significant portion of the cost of the algorithm. Looking back to the cost of randUBV as shown in (10), we note that using a smaller block size reduces the cost of performing QR operations on ${\bf U}$.

Figure 5: Effect of block size on the time and number of iterations required for convergence. (a) Step function decay. (b) Grayscale image. (c) SuiteSparse matrix lp_cre_b.

### 5.4 Stopping tolerance

In our final set of experiments we examined the effect of choosing a stopping tolerance $\tau_{\text{stop}}$ smaller than the desired approximation error tolerance $\tau_{\text{err}}$, with the conjecture that doing so would allow randUBV to attain significantly better compression rates. We used randQB_EI with $p=0,1,2$ as a reference for comparison. The procedure went as follows: in the first step, each sketching algorithm was run until the Frobenius norm approximation error dropped below a set tolerance $\tau_{\text{stop}}$. In the second step, the SVD of ${\bf B}$ was then computed and truncated as ${\bf B}_{r}={\bf U}_{r}{\bf\Sigma}_{r}{\bf V}_{r}^{T}$ to the smallest rank such that $\|{\bf A}-{\bf B}_{r}\|_{F}\leq\tau_{\text{err}}\|{\bf A}\|_{F}$, and the singular vectors of ${\bf A}$ computed as ${\bf U}{\bf U}_{r}$ and ${\bf V}{\bf V}_{r}$ (or as ${\bf Q}{\bf U}_{r}$ for randQB_EI). The time required for each of these two stages was recorded using tic and toc.
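The truncation performed in the second step reduces to a scan over the singular values of ${\bf B}$: if ${\bf U}$ and ${\bf V}$ have orthonormal columns, then $\|{\bf A}-{\bf B}_{r}\|_{F}^{2}=E+\sum_{i>r}\sigma_{i}({\bf B})^{2}$ with $E=\|{\bf A}\|_{F}^{2}-\|{\bf B}\|_{F}^{2}$. A minimal NumPy sketch of this scan (the helper name truncation_rank is ours, and the orthonormality assumption is stated in the docstring):

```python
import numpy as np

def truncation_rank(s, normA, tau):
    """Smallest rank r with ||A - B_r||_F <= tau * ||A||_F.

    s:     singular values of B, in descending order
    normA: Frobenius norm of A
    tau:   relative error tolerance
    Uses ||A - B_r||_F^2 = E + sum(s[r:]**2), E = ||A||_F^2 - ||B||_F^2,
    which assumes U and V have orthonormal columns.
    """
    E = normA**2 - np.sum(s**2)
    # tail[r] = sum of s[r:]**2, for r = 0, ..., len(s)
    tail = np.concatenate([np.cumsum((s**2)[::-1])[::-1], [0.0]])
    ok = (E + tail) <= (tau * normA) ** 2
    return int(np.argmax(ok)) if ok.any() else len(s)
```

For example, with $s=(3,2,1)$ (so $\|{\bf A}\|_{F}=\sqrt{14}$ and $E=0$) and $\tau=0.5$, the scan keeps $r=2$ values: dropping $\sigma=1$ leaves squared error $1\leq 3.5$, while also dropping $\sigma=2$ would leave $5>3.5$.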
Method | $\tau_{\text{stop}}$ | $t_{\text{fac}}$ | $t_{\text{svd}}$ | $t_{\text{total}}$ | $k$ | $r$
---|---|---|---|---|---|---
SVD | – | – | 13.52 | 13.52 | – | 388
UBV | 0.1 | 0.68 | 0.08 | 0.76 | 520 | 439
UBV | 0.09 | 0.87 | 0.11 | 0.98 | 600 | 392
QB(P=0) | 0.1 | 1.22 | 0.21 | 1.44 | 700 | 663
QB(P=1) | 0.1 | 1.12 | 0.09 | 1.22 | 440 | 420
QB(P=2) | 0.1 | 1.55 | 0.08 | 1.63 | 420 | 398

Figure 6: Results for image data with approximation tolerance $\tau_{\text{err}}=0.1$.

#### 5.4.1 Image data

For the image data, we ran all algorithms to a relative error of $\tau_{\text{stop}}=\tau_{\text{err}}=0.1$ with block size $b=20$, and for randUBV additionally considered the stricter stopping tolerance $\tau_{\text{stop}}=0.09$. Results are shown in Figure 6, with all time reported in seconds. There, $t_{\text{fac}}$ is the time required for the QB or UBV factorization, $t_{\text{svd}}$ is the time required to compute the SVD of ${\bf B}$ and the new singular vectors of ${\bf A}$, and $t_{\text{total}}=t_{\text{fac}}+t_{\text{svd}}$. Finally, $k$ is the rank at which the algorithm was terminated, and $r$ the rank to which ${\bf B}$ was truncated. The first line represents the time required to directly compute the SVD of ${\bf A}$ and the optimal truncation rank. We observe that randUBV ran faster than randQB_EI regardless of the value of the power parameter $p$. Even though it required more iterations to converge than randQB_EI with $p=1$ or $p=2$, it required fewer matrix-vector products with ${\bf A}$ or ${\bf A}^{T}$ per iteration. Furthermore, running randUBV to a stopping tolerance that was slightly smaller than the truncation tolerance took somewhat longer but resulted in nearly optimal compression, even superior to subspace iteration with $p=2$.

#### 5.4.2 SuiteSparse data

For the matrix lp_cre_b from the SuiteSparse collection, we ran two trials.
In the first, we ran all algorithms to the rather modest relative error of $\tau_{\text{stop}}=\tau_{\text{err}}=0.5$, and for randUBV additionally considered the stricter stopping tolerance $\tau_{\text{stop}}=0.45$. In the second, we ran the algorithms to the stricter relative error of $\tau_{\text{stop}}=\tau_{\text{err}}=0.15$, and for randUBV additionally considered $\tau_{\text{stop}}=0.14$. We used block size $b=50$ for both trials.

Method | $\tau_{\text{stop}}$ | $t_{\text{fac}}$ | $t_{\text{svd}}$ | $t_{\text{total}}$ | $k$ | $r$
---|---|---|---|---|---|---
SVD | – | – | – | – | – | 608
UBV | 0.5 | 4.69 | 0.93 | 5.62 | 900 | 747
UBV | 0.45 | 5.68 | 0.99 | 6.67 | 1050 | 627
QB(P=0) | 0.5 | 8.33 | 8.32 | 16.66 | 1150 | 1123
QB(P=1) | 0.5 | 5.16 | 3.69 | 8.85 | 700 | 676
QB(P=2) | 0.5 | 6.54 | 3.21 | 9.75 | 650 | 627

Figure 7: Results for lp_cre_b with approximation tolerance $\tau_{\text{err}}=0.5$.

Method | $\tau_{\text{stop}}$ | $t_{\text{fac}}$ | $t_{\text{svd}}$ | $t_{\text{total}}$ | $k$ | $r$
---|---|---|---|---|---|---
SVD | – | – | – | – | – | 2082
UBV | 0.15 | 21.28 | 12.15 | 33.43 | 2600 | 2293
UBV | 0.14 | 24.09 | 13.88 | 37.98 | 2700 | 2150
QB(P=0) | 0.15 | 72.36 | 63.25 | 135.61 | 3600 | 3505
QB(P=1) | 0.15 | 38.05 | 22.91 | 60.97 | 2150 | 2147
QB(P=2) | 0.15 | 48.00 | 21.59 | 69.59 | 2100 | 2100

Figure 8: Results for lp_cre_b with approximation tolerance $\tau_{\text{err}}=0.15$.

Results are shown in Figures 7 and 8, with all times reported in seconds. Due to the size of the matrix ${\bf A}$, we did not attempt to compute its SVD directly but instead found the optimal truncation rank using the precomputed singular values available online [3]. Once again, randUBV ran faster than its subspace-iteration-based counterpart, and using a slightly smaller stopping tolerance $\tau_{\text{stop}}$ improved the compression ratio without significantly increasing the runtime.
The iteration $k$ at which randUBV terminated was significantly smaller than it was for randQB_EI with $p=0$, but significantly larger than for randQB_EI with $p=1$ or $p=2$ (perhaps in part due to the singular value clusters). It should be noted that the matrix ${\bf A}$ in question is quite sparse, with only about $0.03\%$ of its entries nonzero, and fairly skinny, with $m\approx 8n$. It is therefore worth exploring whether randQB_EI might save time on reorthogonalization costs if performed on ${\bf A}^{T}$ instead. We re-ran the experiment for $\tau_{\text{err}}=0.15$, and found that while the factorization time $t_{\text{fac}}$ did not change much, the second step $t_{\text{svd}}$ took around twice as long due to the matrix ${\bf B}$ being $k\times m$ rather than $k\times n$.

## 6 Conclusions

We have proposed a randomized algorithm randUBV that takes a matrix ${\bf A}$ and uses block Lanczos bidiagonalization to find an approximation of the form ${\bf UBV}^{T}$, where ${\bf U}$ and ${\bf V}$ each have orthonormal columns in exact arithmetic and ${\bf B}$ is a block bidiagonal matrix. For square matrices it costs approximately the same per iteration as randQB-type methods run with power parameter $p=0$ while having better convergence properties. On rectangular matrices, it exploits one-sided reorthogonalization to run faster without much degrading the accuracy of the error estimator. Numerical experiments suggest that randUBV is generally competitive with existing randQB-type methods, at least as long as the problem is not so large that it becomes important to minimize the number of passes over ${\bf A}$. A few avenues for future exploration suggest themselves. First and most importantly, roundoff error allows block Lanczos methods to handle repeated singular values, which they would be unable to do in exact arithmetic. This fact has been known for decades, but we are not currently aware of any rigorous convergence bounds that account for finite precision.
Second, reinflation or any more general method for adaptively changing the block size $b$ will make the span of ${\bf V}$ a sum of Krylov spaces of different dimensions. We are not aware of any convergence results that cover this more general setting. It is also worth exploring just how much the block Lanczos method benefits from oversampling. We have observed that running randUBV for a few more iterations than necessary can result in near-optimal compression, but it would be worthwhile to turn the convergence results of e.g. [32] into practical guidance on how many more iterations are necessary. Finally, the behavior of ${\bf U}$ when using one-sided reorthogonalization merits further study. We generally found that when using a larger stopping tolerance $\tau$ the columns of ${\bf U}$ remained closer to orthonormal. It would be highly desirable to obtain a rigorous result establishing that one-sided reorthogonalization is safe as long as only a rough approximation is required, but we leave this goal for future work. MATLAB code is available at https://github.com/erhallma/randUBV, including our main algorithm randUBV as well as code used to reproduce the figures in this paper.

## Acknowledgments

The author would like to thank Ilse Ipsen and Arvind Saibaba for their helpful comments on an earlier draft of this paper.

## References

* [1] Z. Bai, D. Day, and Q. Ye, ABLE: an adaptive block Lanczos method for non-Hermitian eigenvalue problems, SIAM Journal on Matrix Analysis and Applications, 20 (1999), pp. 1060–1082.
* [2] A. Björck, Block bidiagonal decomposition and least squares problems, Perspectives in Numerical Analysis, Helsinki, (2008).
* [3] T. A. Davis and Y. Hu, The University of Florida sparse matrix collection, ACM Transactions on Mathematical Software, 38 (2011), https://doi.org/10.1145/2049662.2049663.
* [4] P. Drineas, I. C. Ipsen, E.-M. Kontopoulou, and M.
Magdon-Ismail, Structural convergence results for approximation of dominant subspaces from block Krylov spaces, SIAM Journal on Matrix Analysis and Applications, 39 (2018), pp. 567–586.
* [5] D. C.-L. Fong and M. Saunders, LSMR: An iterative algorithm for sparse least-squares problems, SIAM Journal on Scientific Computing, 33 (2011), pp. 2950–2971.
* [6] G. Golub and W. Kahan, Calculating the singular values and pseudo-inverse of a matrix, Journal of the Society for Industrial and Applied Mathematics, Series B: Numerical Analysis, 2 (1965), pp. 205–224.
* [7] G. Golub, R. Underwood, and J. Wilkinson, The Lanczos algorithm for the symmetric $Ax=\lambda Bx$ problem, Tech. Report STAN-CS-72-270, Department of Computer Science, Stanford University, 1972.
* [8] G. H. Golub, F. T. Luk, and M. L. Overton, A block Lanczos method for computing the singular values and corresponding singular vectors of a matrix, ACM Transactions on Mathematical Software (TOMS), 7 (1981), pp. 149–169.
* [9] G. H. Golub and C. F. Van Loan, Matrix Computations, The Johns Hopkins University Press, Baltimore, 4th ed., 2013.
* [10] J. F. Grcar, Analyses of the Lanczos Algorithm and of the Approximation Problem in Richardson's Method, PhD thesis, University of Illinois at Urbana-Champaign, 1982.
* [11] M. H. Gutknecht, Block Krylov space methods for linear systems with multiple right-hand sides: an introduction, 2006.
* [12] N. Halko, P. G. Martinsson, and J. A. Tropp, Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions, SIAM Review, 53 (2011), pp. 217–288.
* [13] I. Hnětynková, M. Plešinger, and Z. Strakoš, Band generalization of the Golub–Kahan bidiagonalization, generalized Jacobi matrices, and the core problem, SIAM Journal on Matrix Analysis and Applications, 36 (2015), pp. 417–434.
* [14] S. Karimi and F.
Toutounian, The block least squares method for solving nonsymmetric linear systems with multiple right-hand sides, Applied Mathematics and Computation, 177 (2006), pp. 852–862.
* [15] L. Kaufman, Band reduction algorithms revisited, ACM Transactions on Mathematical Software (TOMS), 26 (2000), pp. 551–567.
* [16] R.-C. Li and L.-H. Zhang, Convergence of the block Lanczos method for eigenvalue clusters, Numerische Mathematik, 131 (2015), pp. 83–113.
* [17] P.-G. Martinsson and S. Voronin, A randomized blocked algorithm for efficiently computing rank-revealing factorizations of matrices, SIAM Journal on Scientific Computing, 38 (2016), pp. S485–S507.
* [18] MATLAB, version 9.9.0 (R2020b), The MathWorks Inc., Natick, Massachusetts, 2020.
* [19] C. Musco and C. Musco, Randomized block Krylov methods for stronger and faster approximate singular value decomposition, in Advances in Neural Information Processing Systems, 2015, pp. 1396–1404.
* [20] C. C. Paige, The computation of eigenvalues and eigenvectors of very large sparse matrices, PhD thesis, University of London, 1971.
* [21] B. N. Parlett and D. S. Scott, The Lanczos algorithm with selective orthogonalization, Mathematics of Computation, 33 (1979), pp. 217–238.
* [22] L. Reichel and Q. Ye, Breakdown-free GMRES for singular systems, SIAM Journal on Matrix Analysis and Applications, 26 (2005), pp. 1001–1021.
* [23] Y. Saad, On the rates of convergence of the Lanczos and the block-Lanczos methods, SIAM Journal on Numerical Analysis, 17 (1980), pp. 687–706.
* [24] H. D. Simon, The Lanczos algorithm with partial reorthogonalization, Mathematics of Computation, 42 (1984), pp. 115–142.
* [25] H. D. Simon and H. Zha, Low-rank matrix approximation using the Lanczos bidiagonalization process with applications, SIAM Journal on Scientific Computing, 21 (2000), pp. 2257–2274.
* [26] Z. Strakoš and P. Tichý, On error estimation in the conjugate gradient method and why it works in finite precision computations,
Electronic Transactions on Numerical Analysis, 13 (2002), pp. 56–80.
* [27] F. Toutounian and M. Mojarrab, The block LSMR method: a novel efficient algorithm for solving non-symmetric linear systems with multiple right-hand sides, Iranian Journal of Science and Technology (Sciences), 39 (2015), pp. 69–78.
* [28] S. Wang, Z. Zhang, and T. Zhang, Improved analyses of the randomized power method and block Lanczos method, arXiv preprint arXiv:1508.06429, (2015).
* [29] Q. Ye, A breakdown-free variation of the nonsymmetric Lanczos algorithms, Mathematics of Computation, 62 (1994), pp. 179–207.
* [30] Q. Ye, An adaptive block Lanczos algorithm, Numerical Algorithms, 12 (1996), pp. 97–110.
* [31] W. Yu, Y. Gu, and Y. Li, Efficient randomized algorithms for the fixed-precision low-rank matrix approximation, SIAM Journal on Matrix Analysis and Applications, 39 (2018), pp. 1339–1359.
* [32] Q. Yuan, M. Gu, and B. Li, Superlinear convergence of randomized block Lanczos algorithm, in 2018 IEEE International Conference on Data Mining (ICDM), IEEE, 2018, pp. 1404–1409.
* [33] Y. Zhou and Y. Saad, Block Krylov–Schur method for large symmetric eigenvalue problems, Numerical Algorithms, 47 (2008), pp. 341–359.
# Force probe simulations using an adaptive resolution scheme

Marco Oestereich, Jürgen Gauss, Gregor Diezemann

###### Abstract

Molecular simulations of the forced unfolding and refolding of biomolecules or molecular complexes allow one to gain important kinetic, structural and thermodynamic information about the folding process and the underlying energy landscape. In force probe molecular dynamics (FPMD) simulations, one pulls one end of the molecule with a constant velocity in order to induce the relevant conformational transitions. Since the extended configuration of the system has to fit into the simulation box together with the solvent, such simulations are very time-consuming. Here, we apply a hybrid scheme in which the solute is treated with atomistic resolution and the solvent molecules far away from the solute are described in a coarse-grained manner. We use the adaptive resolution scheme (AdResS), which has been applied very successfully to various examples of equilibrium simulations. We perform FPMD simulations using AdResS on a well studied system, a dimer formed from mechanically interlocked calixarene capsules. The results of the multiscale simulations are compared to all-atom simulations of the identical system, and we observe that the size of the region in which atomistic resolution is required depends on the pulling velocity, i.e. the particular non-equilibrium situation: for large pulling velocities a larger all-atom region is required. Our results show that multiscale simulations can be applied also in the strongly non-equilibrium situations that the system experiences in FPMD simulations.

Keywords: Force probe simulations, coarse graining, hybrid simulations

## I. Introduction

Force spectroscopy is a standard experimental technique to investigate unfolding pathways, details of the energy landscape and the mechanical properties of single biomolecules and molecular complexes[1, 2, 3].
Usually, one end of the molecule is fixed in space and an external force is applied to the other end. This force is either constant (force clamp, FC) or changes linearly in time (force ramp, FR). In the first case one observes the extension of the system as a function of time, while in the FR protocol the force measured at the pulling device is recorded as a function of the extension[4]. The information extracted from these types of experiments not only allows one to determine (un)folding rates, but also important properties of the folding landscape such as the position of transition states, transition path times or the existence of stable intermediates[5, 6]. Molecular dynamics (MD) simulations are routinely used to investigate conformational transitions in soft matter systems and in particular the folding and unfolding of biomolecules like peptides, proteins or RNA[7, 8]. The mechanical properties of biological systems can be studied with atomistic resolution using the techniques of FPMD simulations (also called steered MD simulations)[9, 10]. In most cases, however, there is a gap of up to five orders of magnitude between the time scales of such simulations and experimental realizations of force spectroscopy[11]. Only recently has it become possible to match the time scales of FPMD simulations and experiments, by using a high-speed atomic force microscope to study the unbinding of a streptavidin–biotin complex[12]. In general, however, it is challenging to reach the experimentally relevant long time scales using atomistic FPMD simulations. One reason lies in the need for a rather large simulation box and the resulting large number of solvent molecules. Additionally, a large number of (un)folding trajectories is required to allow for a meaningful statistical analysis of the results.
In order to speed up FPMD simulations, some of the well established techniques of coarse graining have successfully been applied to study the mechanical folding pathways of proteins and of RNA[13, 14]. However, because these techniques employ simplified interaction potentials and reduced numbers of particles, the details of the formation and rupture of noncovalent bonds cannot be studied with atomistic resolution. Markov State Models (MSMs) allow one to study the kinetics of conformational transitions on long time scales using dynamical information from short atomistic simulation runs[15, 16], and they have successfully been applied to extend the dynamic range of FPMD simulations[17, 18]. Methods developed specifically to increase the efficiency of FPMD simulations are also available[19, 20]. In most cases the primary interest of FPMD simulations lies in the study of the mechanical (un)folding kinetics of the solute, and the dynamics of the solvent molecules only plays a minor role. Therefore, mixed resolution schemes that treat the solute in an all-atom (AA) manner and the solvent in a coarse-grained (CG) way should be applicable. There are different methods to set up mixed resolution schemes. One hybrid method, particularly well suited for systems in which no particles are exchanged between regions of different resolution, uses the definition of virtual interaction sites[21]. These virtual sites are positioned at the center of mass of a group of atoms in the AA part of the system. The CG forces acting on the virtual sites are then distributed uniformly among the neighboring atoms to achieve the coupling between the different parts of the system. We have applied this methodology to the special non-equilibrium situation encountered in FPMD simulations and have found that the scheme is applicable in principle, but the accuracy is not comparable to the one achieved in equilibrium situations[22].
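The uniform force redistribution just described can be sketched in a few lines (a toy illustration, not the implementation of ref. [21]; the function name is ours):

```python
import numpy as np

def distribute_cg_force(f_cg, positions, masses):
    """Place a virtual site at the center of mass of an atom group and
    spread the CG force acting on it uniformly over the group's atoms
    (toy sketch of the virtual-site coupling described in the text)."""
    com = masses @ positions / masses.sum()   # virtual-site position
    n = len(masses)
    per_atom = np.tile(f_cg / n, (n, 1))      # uniform redistribution
    return com, per_atom
```

By construction, the per-atom forces sum back to the original CG force, so the total force on the group is preserved.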
Other approaches to enable simulations with mixed resolution are (among others) the method developed by Izvekov and Voth[23] and the adaptive resolution scheme (AdResS) developed by Kremer, Delle Site and others[24, 25]. In the present paper, we use the latter methodology for a CG description of the solvent, allowing the solute to be described in an AA manner in FPMD simulations. AdResS is based on a partitioning of the simulation box into a region with AA resolution, one with CG resolution, and a crossover or hybrid region which allows for particle exchange between the regions of different resolution. Thus, in our study, the AA region consists of the solute and some surrounding solvent molecules, while the solvent that is not in the immediate neighborhood of the solute is treated in a CG manner. Since in FPMD simulations the system is strongly driven out of thermal equilibrium, it is not clear a priori that methods developed for equilibrium simulations are also applicable in these situations. As in earlier investigations, we will perform FPMD simulations using a calix[4]arene catenane dimer in mesitylene solvent as a model system. Apart from the mentioned hybrid simulations, we have investigated this system in AA simulations and have found that its reversible unfolding kinetics can well be understood in terms of a simple two-state model[26, 27, 28]. In equilibrium, the two calix[4]arene "cups" form a complex stabilized by a ring of 16 hydrogen bonds (H-bonds). Complete dissociation of the cups is prevented by a set of four intertwined aliphatic loops consisting of 14 methylene groups each. We use mesitylene as a solvent because, due to its aprotic nature, it does not form H-bonds that interfere with the intramolecular H-bonds between the two calix[4]arene monomers. The paper is organized as follows.
In the next Section, the computational details are presented, including a brief recapitulation of the AdResS methodology and results of equilibrium AdResS simulations of the calix[4]arene dimer system. We then compare the results of FPMD simulations performed employing AdResS to those of AA simulations and close with some concluding remarks.

## II. Computational Methodology

### 1. All-atom simulations

All AA simulations were performed using the GROMACS 2018.4 program package employing the OPLS-AA force field[29, 30, 31]. We used a stochastic dynamics integrator[32] at a temperature of 298 K with a friction constant of 0.1 ps. All bonds were constrained using the LINCS algorithm[33], allowing for a time step of 2 fs. Short-range electrostatic and van der Waals interactions were computed using a cut-off of 1.2 nm. The long-range Coulomb interactions were treated using the reaction field method with a relative dielectric constant of 2.4. For the van der Waals interactions, we applied a dispersion correction[34]. For the AA simulations the neighbor list was updated every 25 simulation steps, and for the CG simulations a pair list with a cutoff of 1.37 nm was used. We used Cartesian periodic boundary conditions in all simulations. We performed an energy minimization starting with a (7.5 nm)$^3$ cubic simulation box containing one calix[4]arene catenane dimer and 1780 mesitylene molecules, cf. Fig.1.

Figure 1: a) Chemical structure of the calix[4]arene catenane dimer with the H-bonds indicated. Blue: Urea-Urea (UU) bonds stabilizing the closed structure; Green: Urea-Ether (UE) bonds stabilizing the open structure. The methyl groups at the narrow rim of the calixarene cups are omitted. b) Stick model of the calix[4]arene dimer along with the definition of the reference group and the pulled group used in the setup of the FPMD simulations.
These groups are defined as the center of mass of the methoxy-carbon atoms at the narrow rim of one calix[4]arene monomer. The reference group is fixed in space and the pulled group is moved along the vector connecting the two groups using a harmonic potential. The end-to-end distance $r_{\rm ee}$ is defined as the distance between the two groups. The aliphatic loops are omitted for clarity.

After this, the system was equilibrated in the canonical ensemble for 200 ps. Then it was coupled to a Berendsen barostat[35] with a time constant of 0.5 ps and an isothermal compressibility of $\rm 8.26\cdot 10^{-5}$ bar$^{-1}$. The box size was determined to be (7.49 nm)$^3$ for a pressure of 1 bar, and this value was used in all AA simulations. All production runs were performed in the canonical ensemble using this box size. We mention that this box size is larger than the one we used in earlier investigations ((5.8 nm)$^3$)[36, 37]. However, the larger box size used here is also used for the AdResS simulations, and therefore a direct comparison is possible.

### 2. Coarse-grained potentials

The CG potentials for the solvent molecules were computed using the iterative Boltzmann inversion (IBI) method[38]. IBI relates the free energy of a pair of particles to the logarithm of the radial distribution function (RDF) to obtain a potential of mean force (PMF) as a function of the distance $r$ between the particles[38, 39]. Starting from the expression for the PMF, $U(r)=-k_{B}T\ln(g(r))$, with $g(r)$ denoting the RDF and $k_{B}$ the Boltzmann constant, one obtains the effective pair potential iteratively. The initial PMF is estimated from the reference RDF, $U(r)^{CG}_{0}=-k_{B}T\ln(g(r)_{ref})$, and the iteration cycle is defined by

$U(r)^{CG}_{i+1}=U(r)^{CG}_{i}+k_{B}T\ln\left({g(r)_{i}\over g(r)_{ref}}\right)$ (1)

The mesitylene molecules were treated as spheres and the RDF of the center of mass was used in the determination of the PMF.
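Eq. (1) acts pointwise on tabulated potentials. A minimal sketch of one update step (function names are ours; in the real workflow each $g(r)_i$ comes from a CG simulation run with the current potential):

```python
import numpy as np

def ibi_initial_guess(g_ref, kBT):
    """U_0(r) = -kB*T*ln(g_ref(r)), the PMF of the reference RDF."""
    return -kBT * np.log(g_ref)

def ibi_step(U_i, g_i, g_ref, kBT):
    """One IBI update, eq. (1): U_{i+1}(r) = U_i(r) + kB*T*ln(g_i(r)/g_ref(r)).
    All arrays are tabulated on a common grid of pair distances r."""
    U_next = U_i.copy()
    mask = (g_i > 0) & (g_ref > 0)  # update only where both RDFs are sampled
    U_next[mask] += kBT * np.log(g_i[mask] / g_ref[mask])
    return U_next
```

Where $g(r)_i$ exceeds $g(r)_{ref}$ (too much structure), the potential is raised, i.e. made more repulsive; the reference RDF is a fixed point of the update.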
In the iterative scheme we used a cut-off for the potential of 1.2 nm, and each simulation run had a duration of 200 ps (the first 20 ps were omitted for equilibration). The effective pair potential was smoothed using cubic splines and we iterated the procedure 325 times without pressure correction. The simulations were performed using the program package VOTCA[40, 41] (version 1.3), and the reference RDF was obtained from an AA simulation (25 ns) of a pure mesitylene system in a box of size (7.49 nm)$^3$. The results for the RDF are presented in Fig.2.

Figure 2: RDF for the center of mass of the mesitylene molecules for different steps in the IBI procedure of the calculation of the CG pair potential. The inset shows the root mean square deviation of the RDF $g(r)_{i}$ relative to the reference RDF $g(r)_{ref}$, cf. eq.(1).

Without showing the results here, we note that the resulting PMF is of a purely repulsive nature and can approximately be described by an inverse power law potential with an exponent close to 12, cf. the Supporting Information. It is well known that coarse graining speeds up the dynamics relative to AA simulations[42, 43]. We quantified this effect by measuring the diffusion coefficients, which are given by $D_{\rm CG}=11.8\cdot 10^{-10}$ m$^{2}$s$^{-1}$ and $D_{\rm AA}=1.7\cdot 10^{-10}$ m$^{2}$s$^{-1}$. Note that these values differ from the corresponding ones given in ref.[22] ($D_{\rm CG}=8.8\cdot 10^{-10}$ m$^{2}$s$^{-1}$ and $D_{\rm AA}=7.2\cdot 10^{-10}$ m$^{2}$s$^{-1}$). We attribute this to the different box sizes, the different treatment of the long-range electrostatic interactions and the different integrators.

### 3. Adaptive resolution scheme

For the AdResS methodology it is important to balance the chemical potential in the AA region and the CG region.
This is accomplished by the computation of a thermodynamic force that is calculated iteratively according to

${\bf F}^{TD}_{i+1}({\bf x})={\bf F}^{TD}_{i}({\bf x})-{M\over\kappa_{T}\rho_{ref}^{2}}{\bf\nabla}\rho_{i}({\bf x})$ (2)

and acts on the CG part in the hybrid region, allowing the exchange of molecules between the different regions[25, 44]. Here, $M$ is the mass of the molecules, $\kappa_{T}$ is a constant conceptually related to the isothermal compressibility, and $\rho_{i}({\bf x})$ is the density in the $i$th iteration step. The iteration starts with the initial density and a vanishing thermodynamic force. We have used a spherical setting with different radii of the AA region and a constant slab thickness of the hybrid region of $s_{\rm Hy}=1.2$ nm. The range of the thermodynamic force was enlarged by 0.2 nm in order to incorporate molecules that are only partly placed in the hybrid region. We varied the radius of the AA region, $r_{\rm AA}$, in order to study the dependence of the results on this choice. The values used are $r_{\rm AA}=1.6$, $0.8$ and $0.4$ nm. Since the average end-to-end distance in equilibrium is about $1.4$ nm, this means that for the smallest value of $r_{\rm AA}$ parts of the calix[4]arene dimer are located in the hybrid region. The center of the calix[4]arene dimer was always kept in the center of the simulation box, ensuring that the distances from the pulled group and the reference group to the border of the CG region stayed the same. We used a force capping methodology in the hybrid region[45] with a maximum force of $F_{max}=2.5\cdot 10^{5}$ kJ/(mol nm). This ensures that the forces between two particles at the border between the CG region and the hybrid region do not grow too large. Force capping is applied because the CG potential depends only on the distance between the centers of mass of the mesitylene molecules and is independent of their relative orientation.
Therefore, the distance between parts of the molecules can become very small. All simulations had a duration of 1 ns, with the first 400 ps discarded for equilibration. We performed 88 iterations for $r_{\rm AA}=1.6$ nm, 149 for $r_{\rm AA}=0.8$ nm and 244 for $r_{\rm AA}=0.4$ nm in order to obtain flat density profiles throughout the simulation box. (For the smaller AA regions a flat density profile in the hybrid region is more important.) In Fig.3, we plot the density as a function of the distance from the center of the simulation box, which coincides with the center of mass of the calix[4]arene dimer.

Figure 3: Relative density of the mesitylene solvent as a function of the distance from the center, $r_{\rm center}$, of the simulation box. The vertical lines represent the boundaries of the AA region and the hybrid region.

Since the calix[4]arene dimer resides in the center of the box, the solvent density vanishes for small values of $r_{\rm center}$. For larger distances, the density follows the one of the AA simulation to a very good approximation. Due to the structure of the calix[4]arene dimer, the distribution of the solvent density in the immediate neighborhood of the solute is not isotropic. This is the reason for the appearance of the hump-like structure at $r_{\rm center}\sim 1$ nm. For further information regarding the thermodynamic force we refer to the Supporting Information.

### 4. Equilibrium simulations

AdResS is well known to reproduce AA results in equilibrium simulations for a number of different situations[25]. In order to ensure that it also works for our particular system, we performed equilibrium simulations and monitored the most important structural features. In Fig.4 (a) we show the end-to-end distance $r_{\rm ee}$ as a function of the simulation time and its probability distribution for a 50 ns AA simulation.
We additionally present the distributions for AdResS simulations using various values for the radius of the AA region, $r_{\rm AA}$, in Fig.4 (b).

Figure 4: a) Left: Time evolution of the end-to-end distance from an AA simulation; Right: Probability distribution of the observed values of $r_{\rm ee}$. b) Probability distributions for different radii of the AA region in AdResS simulations. The full lines represent fits to Gaussians.

It can be seen that the average value of $r_{\rm ee}$ is almost independent of the size of the AA region. Only for the smallest value of $r_{\rm AA}=0.4$ nm does it differ from the AA value, by about 1.5%. From the similarity of the widths of the distributions we conclude that the fluctuations are also very well sampled by the AdResS simulations using the given parameters. We have also monitored the number of UU-bonds stabilizing the closed structure. In all simulations one observes that most of the time (more than 50%) the maximum number of 16 UU-bonds is formed, and there are quite frequent fluctuations in which one or two bonds open. However, there are hardly any significant differences between the AA simulation and the AdResS simulations. We thus conclude that the AdResS simulations give a good representation of the AA results.

## III. FPMD simulations

### 1. Simulation setup

All FPMD simulations presented in this work were performed using the FR protocol. We used two modes, a pull mode and a relax mode, where after pulling the dimer into the open conformation the pulling direction is inverted and all other parameters remain the same. The reference group was fixed in space and a harmonic potential was applied to the pulled group, where the groups are defined as in Fig.1. The force measured at the spring is given by

$F=K(V\cdot t-z(t))$ (3)

Here, $K$ is the spring constant, $V$ the pulling velocity, and $z(t)$ denotes the deviation of the position of the pulled group from its initial value.
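Eq. (3), together with the loading-rate relation $\mu=K\cdot V$, translates directly into code (a trivial sketch with our own function names; units as quoted in the surrounding text):

```python
def spring_force(K, V, t, z):
    """Force measured at the pulling spring, eq. (3): F = K * (V*t - z(t)).
    K: spring constant (N/m), V: pulling velocity (m/s),
    z: deviation of the pulled group from its initial position (m)."""
    return K * (V * t - z)

def loading_rate(K, V):
    """Loading rate mu = K * V (N/s) of the force-ramp protocol."""
    return K * V
```

When the pulled group lags behind the moving spring minimum ($z < V\cdot t$), the measured force is positive and grows at rate $\mu$ until a conformational transition lets $z$ jump forward.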
Note that the extension is defined as $x=V\cdot t$ and the so-called loading rate is given by the product of the force constant and the pulling velocity, $\mu=K\cdot V$. In FPMD simulations the system is driven out of equilibrium and the solute is pulled through the solvent. In our earlier simulations employing virtual sites[22], the coarse graining of the solvent did not have a dramatic impact on the equilibrium properties of the system, but the rupture forces were significantly reduced relative to those obtained from AA simulations. Therefore, in the present study, we investigate the dependence of the results of AdResS simulations on the size of the AA region, the pulling velocity and the stiffness of the pulling device. As mentioned above, we used a box of size (7.49 nm)$^3$ for all AA simulations and the AdResS simulations, although we have found in earlier studies of the calix[4]arene catenane dimer system that a box length of 5.8 nm is sufficient for all FPMD simulations performed so far[36, 37]. A box of this size appears to be a good compromise between the two extreme scenarios that have to be considered in the case of FPMD simulations. For quasistatic pulling, $r_{\rm ee}$ approximately follows the pulling protocol, $r_{\rm ee}\simeq x_{\rm max}$, while for fast pulling one expects $r_{\rm ee}\ll x_{\rm max}$, where $x_{\rm max}$ denotes the extension reached at the end of the pulling simulation. For a typical value of $x_{\rm max}\sim 4$ nm a box of size (5.8 nm)$^3$ is large enough. Such a box contains one dimer and 700 mesitylene molecules with 21 atoms per molecule. In order to estimate the relevant number of atoms in an AdResS simulation, we compute the average number of mesitylene molecules in the respective regions, assuming a homogeneous density for simplicity. For instance, using $r_{\rm AA}=1.6$ nm, we have about 90 mesitylene molecules in the AA region, 380 in the hybrid region and 1315 in the CG region.
As a result of this naive estimate, we have to treat explicitly only approximately half the number of solvent particles as compared to the AA simulations. Using noncubic boxes and nonspherical AA regions in the AdResS simulations, this ratio can be reduced even further. Furthermore, adaptive schemes to determine the size of the AA region are also expected to be very effective. However, a direct estimate of the computational efficiency of the AdResS simulations as compared to the AA FPMD simulations is not possible with the present preliminary implementation of AdResS. As a brief presentation of the results of AA simulations, in Fig.5 we show examples of the most important observables as a function of the extension, $x=V\cdot t$, for a pulling simulation with $K=1$ N/m and $V=1$ m/s (i.e. $\mu=1$ N/s). Shown are the end-to-end distance $r_{\rm ee}$ (red), the force measured at the spring attached to the pulled group, $F$ (black), and additionally the number of UU-bonds (blue) and UE-bonds (green).

Figure 5: End-to-end distance (red), force (black), number of UU-bonds (blue) and UE-bonds (green) as a function of the extension $V\cdot t$ for a representative AA FPMD simulation in the pull mode. The parameters are $K=1$ N/m and $V=1$ m/s ($\mu=1$ N/s).

The transition from the closed state to the open state, which takes place at an extension of roughly 2.6 nm, is observable in all quantities. The end-to-end distance increases almost linearly from the equilibrium value of 1.4 nm to about 1.55 nm and jumps at the transition to 2.1 nm. The force also increases linearly until there is a rip in the force versus extension curve (FEC), after which it increases again. If the molecular energy landscape is assumed to be harmonic, one can extract the corresponding stiffness from the slope of the FECs[46, 28, 36]. It is also evident that the number of UU-bonds (cf. Fig.1) slowly decays for small extensions and then abruptly drops to zero at the transition point.
At this point, the UE-bonds stabilizing the open state are formed. Due to the non-equilibrium nature of the pulling procedure, the maximum number of 8 UE-bonds is not reached. We mention that the behavior in the relax mode simulations is quite similar for the chosen parameters, albeit with a finite hysteresis. For very fast pulling the transition becomes irreversible in the sense that in the relax mode the closed state is no longer reached[27].

### 2\. Force versus extension curves

In Fig.6 we show examples of FECs as obtained from AA simulations and from AdResS simulations with $r_{\rm AA}=1.6$ nm for $K=1$ N/m and $V=1$ m/s. The simulations were always performed in the same way: first a pulling simulation was performed until the extension $V\cdot t$ reached a value of 4 nm, and then a relax mode simulation followed. Therefore, for large extensions the two curves are on top of each other. The hysteresis demonstrating the non-equilibrium nature of the FPMD simulations is manifested by the different extensions at which the respective transitions take place. Only for quasistatic pulling, i.e. very small pulling velocities, do the results of the pull mode and the relax mode simulations coincide. Figure 6: Examples of FECs for AA simulations and AdResS simulations for $K=1$ N/m and $V=1$ m/s. The results for the AdResS simulations (with $r_{\rm AA}=1.6$ nm) are shifted by 1000 pN for better visibility. The horizontal bars indicate the characteristic forces $F^{\rm min}_{\rm rupt}$, $F^{\rm max}_{\rm rupt}$ (black) and $F^{\rm min}_{\rm rejoin}$, $F^{\rm max}_{\rm rejoin}$ (red). It is evident that the FECs for the AA simulations and the AdResS simulations are very similar for the parameters used and the chosen realization. In order to test the applicability of the AdResS methodology to FPMD simulations in more detail, we used two values for the loading rate, $\mu=1$ N/s and $\mu=10$ N/s.
These are quite high values for $\mu$ and therefore the system is driven strongly out of equilibrium. Furthermore, we used different values of the pulling parameters as given in Table 1.

| $K$ [N/m] | $V$ [m/s] ($\mu=1.0$ N/s) | $V$ [m/s] ($\mu=10.0$ N/s) |
|---|---|---|
| 1.0 | 1.0 | 10.0 |
| 2.0 | 0.5 | 5.0 |
| 4.0 | 0.25 | 2.5 |
| 8.0 | 0.125 | 1.25 |

Table 1: Combinations of spring constant $K$ and pulling velocity $V$ used in the simulations for varying loading rate $\mu=K\cdot V$. For each set we performed 300 AA simulations and 300 AdResS simulations for $r_{\rm AA}=0.4$ nm, 0.8 nm and 1.6 nm.

As the conformational transitions from the closed state to the open state and vice versa are stochastic processes, the rupture forces and the rejoin forces will vary between different realizations of the FPMD simulation using the same parameters. Therefore, in order to provide a meaningful statistical analysis of the results, we performed 300 simulations for each set of parameters. We first consider the FECs as they have been shown for one example in Fig.6. In Fig.7 we show averaged FECs[46, 47] obtained from pull mode simulations. The shaded areas are meant to represent the width of the distributions (the second moment). Figure 7: Averaged FECs for AA simulations and AdResS simulations using different values for $r_{\rm AA}$ as indicated. The pulling parameters are $K=1$ N/m and $V=1$ m/s. It is evident that there is very good agreement between the results of the AA simulations and of the AdResS simulations for $r_{\rm AA}=1.6$ nm. For $r_{\rm AA}=0.8$ nm, the system appears to be softer in the sense that the rupture force is smaller. The slopes of the averaged FECs, however, are the same, meaning that the molecular stiffness is unaltered. This changes for the smallest AA region used, $r_{\rm AA}=0.4$ nm. Here, the rupture event appears at a still smaller force and has more the form of a broad shoulder than of a rip.
However, the most prominent difference from all other simulations is that the slope in the open state is smaller than in the other cases; furthermore, for large extensions, $x\gtrsim 3.5$ nm, the force decreases again, indicating that the system becomes unstable. This can be understood from the fact that the end-to-end distance exceeds 2.2 nm for these extensions, cf. Fig.5. Thus, a substantial part of the calix[4]arene dimer enters the hybrid region and the aliphatic loops are destabilized because the relevant interactions are not considered in the AdResS protocol. Due to this failure of AdResS for such a small value of $r_{\rm AA}$ when applied to FPMD simulations, we will no longer consider simulations performed using $r_{\rm AA}=0.4$ nm. In Fig.8 we show averaged FECs in the pull mode and the relax mode for various choices of the parameters used in the protocol. Figure 8: Averaged FECs for AA simulations and AdResS simulations for two values of $K$ as indicated and two loading rates. Left panels: $\mu=1$ N/s; Right panels: $\mu=10$ N/s. The general features are reproduced qualitatively well by all AdResS simulations, independent of the pulling velocity and the loading rate. The hysteresis shows the known dependence on $V$, and an increase of the rupture force with the loading rate is also observed[46, 36]. In particular, the results of the AA simulations and of the AdResS simulations for $r_{\rm AA}=1.6$ nm coincide very well for all sets of parameters in both modes. The finite size of the AA region becomes important for $r_{\rm AA}=0.8$ nm. In this case, the mean rupture forces are smaller and the mean rejoin forces are somewhat larger than in the case of the AA simulations. In the case of the pull mode simulations, the differences in the mean rupture forces exceed the widths of the distributions. Additionally, the differences apparently are more pronounced for the larger loading rate.

### 3\. Rupture force distributions

In order to discuss these findings in more detail, we consider the distributions of the characteristic forces obtained from the individual FECs as shown in Fig.6. In the present paper we will consider the mean force between the maxima and the minima in the transition region for both cases, pull and relax mode FECs, $F_{\rm rupt}={1\over 2}\left(F^{\rm min}_{\rm rupt}+F^{\rm max}_{\rm rupt}\right)\quad\mbox{and}\quad F_{\rm rejoin}={1\over 2}\left(F^{\rm min}_{\rm rejoin}+F^{\rm max}_{\rm rejoin}\right),$ (4) cf. Fig.6. These forces behave somewhat differently from the maximum rupture force and the minimum rejoin force but capture all important features[36, 37]. Furthermore, and more importantly in the present context, for reversibly binding systems like the calix[4]arene dimer investigated here, $F_{\rm rupt}$ and $F_{\rm rejoin}$ will converge to the equilibrium value, $F_{\rm eq}$, for slow pulling[46]. Fig.9 shows these distributions for the four values of the force constants and the concomitant pulling velocities as listed in Table 1. Figure 9: Distributions of rupture forces $F_{\rm rupt}$ and rejoin forces $F_{\rm rejoin}$ defined in Eq.(4). The vertical bars indicate the average forces and the dotted lines represent Gaussian fits to the distributions. The mean rupture forces (vertical bars in Fig.9) slightly decrease as a function of the force constant and the mean rejoin forces slightly increase. This indicates that they will merge for vanishing pulling velocity. As pointed out above, the AdResS simulations using $r_{\rm AA}=1.6$ nm yield results almost identical to the AA simulations. In the case of the AdResS simulations with $r_{\rm AA}=0.8$ nm, the distributions of rupture forces are shifted to smaller forces while the $F_{\rm rejoin}$ distributions tend to be shifted to larger forces. This means that in these AdResS simulations the dimer behaves as if the "distance to equilibrium" were smaller than in the AA simulations.
This finding is similar to what we have observed in temperature-dependent AA simulations for increasing temperature[48]. We mention that the number of simulations performed (300 for each set of parameters) is not large enough for a detailed discussion of the shape and the widths of the distributions. Furthermore, the observed differences appear to depend only weakly on the force constant and more strongly on the loading rate. Only for the slowest pulling velocity, $V=0.125$ m/s, is the difference in the rupture forces somewhat smaller than for faster pulling. At the same time the $F_{\rm rejoin}$ distribution becomes rather broad. We interpret this finding as resulting from the fact that for very slow pulling, in the relax mode simulation the system resides in the open state for quite a long time and $r_{\rm ee}$ decreases only slowly. Therefore, the higher solvent mobility in the CG region (and also in the hybrid region) apparently gives rise to some further softening of the dimer. The most prominent features that can be observed in the force distributions are also reflected in the mean forces. In Fig.10, we present the mean rupture and the mean rejoin forces as a function of the pulling velocity. Figure 10: Mean forces $\langle F_{\rm rupt}\rangle$ and $\langle F_{\rm rejoin}\rangle$ as a function of the pulling velocity $V$. Left: $\mu=1$ N/s; Right: $\mu=10$ N/s. An increase of the rupture force and a slight decrease of the rejoin force is observed for both loading rates. The differences between the results for $r_{\rm AA}=0.8$ nm and the other simulation results discussed above are evident, and they are somewhat more pronounced for $\mu=10$ N/s than for $\mu=1$ N/s. As mentioned above, the differences appear to depend mainly on the loading rate. The average difference in the values of $\langle F_{\rm rupt}\rangle$ is on the order of 400 pN for $\mu=1$ N/s and about 650 pN for $\mu=10$ N/s.
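As a sketch of how a characteristic force in the spirit of Eq. (4) can be extracted from a single pull-mode FEC, consider the following hypothetical array-based analysis. It is not the evaluation code used in this study, and real curves are noisy and would need smoothing before the drop is located.

```python
import numpy as np

def characteristic_force(force):
    """Locate the largest single-step force drop in a pull-mode FEC and
    return the mean of the two forces bracketing it, i.e. an estimate of
    (F_max + F_min) / 2 in the spirit of Eq. (4).  Assumes the
    conformational transition produces the biggest drop in the curve."""
    force = np.asarray(force, dtype=float)
    drops = force[:-1] - force[1:]          # positive where the force falls
    i = int(np.argmax(drops))               # index just before the rip
    return 0.5 * (force[i] + force[i + 1])
```

For a synthetic ramp with a single rip, e.g. forces `[0, 100, 200, 300, 150, 250, 350]` (arbitrary units), the rip is detected between 300 and 150 and the routine returns their mean, 225.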
The difference in the mean rejoin forces is almost the same for both loading rates (approximately 150 pN). The decrease of the rupture force for the AdResS simulations with $r_{\rm AA}=0.8$ nm indicates that the increased solvent mobility ($D_{\rm CG}/D_{\rm AA}\simeq 7$) is important. If we assume that the dynamics of the calix[4]arene dimer is dominated by activated barrier crossing, the celebrated Bell model[49] yields a logarithmic dependence on the diffusion coefficient, $F_{\rm rupt}\sim\log{(\mu/D)}$[50]. However, the ratio between the rupture forces should not vary with loading rate. The larger differences for $\mu=10$ N/s are possibly due to the fact that the regime of drift motion might be entered for fast pulling[36]. In this regime the rupture force behaves as $F_{\rm rupt}(V\to\infty)\sim\sqrt{\mu/D}$[50, 51] and therefore depends somewhat more strongly on $\mu$ than in the case of activated dynamics. The situation apparently is somewhat different if the relax mode simulations are considered. The differences in the mean rejoin forces are almost independent of $V$ and of $\mu$. The increased solvent mobility experienced by the dimer in the open state seems to have a smaller impact on the rejoin forces.

### 4\. Analysis of the H-bond network

We now discuss some aspects of the dynamics of the H-bond network and some properties of the energy landscape. As mentioned above, the network of UU-bonds stabilizes the closed state and these bonds open at the transition into the open state, where the network of UE-bonds is formed, cf. Fig.5. Since the transition is a stochastic process and takes place at different extensions depending on the pulling velocity, the number of H-bonds also strongly depends on the extension. On the other hand, the different conformational states are characterized by the corresponding ranges of the end-to-end distance.
Therefore, in the following we consider the mean number of H-bonds as a function of the average end-to-end distance of the dimer, $\langle r_{\rm ee}\rangle$, and present the results in Fig.11. Figure 11: Averaged number of H-bonds (both UU and UE) as a function of $\langle r_{\rm ee}\rangle$. The shaded areas indicate the width of the distributions for the AA simulations. This representation has been shown to depend only weakly on the pulling protocol[28, 36]. In the pull mode, the average number of UU-bonds, $\langle\\#UU\rangle$, decreases from almost 16, the maximum possible number, to zero. The average number of UE-bonds, $\langle\\#UE\rangle$, reaches a maximum at $\langle r_{\rm ee}\rangle\sim 2$ nm and decreases again upon further stretching. A similar behavior is observed in the relax mode simulations, with the only difference that the maximum number of UE-bonds formed is somewhat larger than in the pull mode. In all cases, the AdResS simulations using $r_{\rm AA}=1.6$ nm yield results that are in excellent agreement with those obtained from AA simulations. For simulations employing the smaller AA region, $r_{\rm AA}=0.8$ nm, $\langle\\#UU\rangle$ behaves very similarly to the other results, while $\langle\\#UE\rangle$ exceeds the values obtained from the other simulations in the region of the maximum, indicating a slightly higher stability of the open state in this case, as would be expected for a somewhat smaller loading rate. However, these differences are still very small and within the width of the distributions. It is interesting to note that $\langle\\#UU\rangle$ is the same as for the AA simulations also in the relax mode simulations, and the enhanced stability of the open state plays almost no role for the structure of the closed state. As in our earlier work on the unfolding kinetics of the calix[4]arene dimer, we use the average number of H-bonds for a characterization of the energy landscape of the system.
In particular, we define the minima of the closed state ($q_{C}$), the open state ($q_{O}$) and the transition state ($q_{T}$) in the following way[36]: $q_{C}=\langle r_{\rm ee}\rangle(\langle\\#UU\rangle=8)\,;\,q_{O}=\langle r_{\rm ee}\rangle(\langle\\#UE\rangle={\rm max})\,;\,q_{T}=\langle r_{\rm ee}\rangle(\langle\\#UU\rangle=\langle\\#UE\rangle)$ (5) and present the results in Fig.12. Figure 12: Locations of the minima and the transition states as defined in Eq.(5) as a function of the force constant; Left panels: $\mu=1$ N/s; Right panels: $\mu=10$ N/s. Upper panels: pull mode; Lower panels: relax mode. The error bars represent the standard deviations of the distributions and the dotted horizontal lines are guides to the eye. For both loading rates, the results are independent of the force constant and coincide within the statistical error. We conclude from these findings that the energy landscape of the dimer is hardly altered in the AdResS simulations.

## IV. Conclusions

We have presented a detailed investigation of the applicability of the AdResS methodology to FPMD simulations. As an example we chose a well-studied system, a calix[4]arene dimer, that is known to undergo conformational transitions that can be described by a two-state model. Additionally, the system is small enough to allow for a direct comparison with AA simulations. We studied the system for different sizes of the AA region and a fixed slab thickness of the hybrid region. In equilibrium, AdResS is known to work with excellent results when compared to AA simulations, and we confirmed this finding with the simulations of our system consisting of one dimer in mesitylene solvent. Since AdResS has been developed for equilibrium simulations, it is not clear to what extent it can be applied to non-equilibrium situations. We therefore studied the performance of AdResS as applied to FPMD simulations for high pulling velocities and large loading rates.
We found that all AdResS simulations using an AA region with a radius of $r_{\rm AA}=1.6$ nm yield results that are basically identical to the results obtained with AA simulations. This holds for all quantities that characterize the kinetics of the conformational transition in our model system and also for structural features like the number of H-bonds formed in each of the states. It is important to point out that a box size comparable to the AA region in the AdResS simulations is much too small to give reliable results in AA simulations, in particular in the open state of the dimer. AdResS simulations employing a smaller AA region ($r_{\rm AA}=0.8$ nm) worked very well in equilibrium but gave rise to deviations from the AA results in FPMD simulations. This holds in particular when the rupture forces and rejoin forces are considered. Here, the system appears softer and the distance to equilibrium for a given loading rate seems to be reduced. We attribute this to the higher solvent mobility in the hybrid region and in the CG region. The observed differences apparently increase with increasing loading rate, meaning that for larger $\mu=K\cdot V$ a larger AA region is required; the results hardly depend on the stiffness $K$ or the pulling velocity $V$ separately. When structural features like the number of H-bonds stabilizing the conformational states of the dimer are considered, the AdResS simulations gave a very good representation of the results of the AA simulations. The same holds for the characteristic features of the energy landscape as deduced from the properties of the H-bond network. The model system considered in the present work is very simple in the sense that the (un)folding pathway is determined by two states only. Furthermore, the solvent is of an aprotic nature and therefore does not interfere with the H-bonds stabilizing the conformations of the dimer.
However, the solvent mobility has proven to play a crucial role even in this well defined situation. Therefore, the size of the AA region in applications of the AdResS methodology to FPMD simulations has to be chosen with care. This holds in particular if polar solvents are considered since in that case electrostatic interactions become of major importance. A study of this effect is under way. In conclusion we have demonstrated that the AdResS methodology can be applied successfully to perform FPMD simulations. The number of particles treated in an AA manner can be reduced considerably but the current preliminary implementation of AdResS for FPMD simulations does not allow for a quantitative estimate of the computational efficiency of the methodology. ## Supporting material See the supporting information for some aspects of AdResS, the coarse grained potentials and the thermodynamic force. ## Acknowledgement Financial support by the DFG via TRR 146 is gratefully acknowledged. The authors gratefully acknowledge the computing time granted on the supercomputer Mogon at Johannes Gutenberg University Mainz (hpc.uni-mainz.de). ## References * [1] E. Evans, Annu. Rev. Biophys. Biomol. Struct. 30, 105 (2001). * [2] S. Kumar and M. S. Li, Phys. Rep. 486, 1 (2010). * [3] M. T. Woodside and S. M. Block, Annu. Rev. Biophys. 43, 19 (2014). * [4] G. Žoldák and M. Rief, Curr. Opin. Struct. Biol. 23, 48 (2013). * [5] O. K. Dudko, Q. Rev. Biophys. 49, 1 (2016). * [6] H. S. Chung, J. Mol. Biol. 430, 409 (2018). * [7] S. Bottaro and K. Lindorff-Larsen, Science 361, 355 (2018). * [8] P. S. Georgoulia and N. M. Glykos, Arch. Biochem. Biophys. 664, 76 (2019). * [9] B. Isralewitz, M. Gao, and K. Schulten, Curr. Opin. Struc. Biol. 11, 224 (2001). * [10] M. Sotomayor and K. Schulten, Science 316, 1144 (2007). * [11] F. Franz, C. Daday, and F. Gräter, Curr. Op. Struct. Biol. 61, 132 (2020). * [12] F. Rico, A. Russek, L. Gonzalez, H. Grubmüller, and S. Scheuring, Proc. Nat. Acad. Sci. 
116, 6594 (2019). * [13] C. Hyeon and D. Thirumalai, P. Natl. Acad. Sci. USA 102, 6789 (2005). * [14] R. B. Best and G. Hummer, J. Am. Chem. Soc. 130, 3706 (2008). * [15] B. E. Husic and V. S. Pande, J. Am. Chem. Soc. 140, 2386 (2018). * [16] F. Noé and E. Rosta, J. Chem. Phys. 151, 190401 (2019). * [17] S. Ghosh, A. Chatterjee, and S. Bhattacharya, J. Chem. Theory Comput. 13, 957 (2017). * [18] F. Knoch, K. Schäfer, G. Diezemann, and T. Speck, J. Chem. Phys. 148, 044109 (2018). * [19] G. Ozer, E. F. Valeev, S. Quirk, and R. Hernandez, J. Chem. Theory Comput. 6, 3026 (2010). * [20] J. J. Booth and D. V. Shalashilin, J. Phys. Chem. B 120, 700 (2016). * [21] A. J. Rzepiela, M. Louhivuori, C. Peter, and S. J. Marrink, Phys. Chem. Chem. Phys. 13, 10437 (2011). * [22] K. Schäfer, M. Oestereich, J. Gauss, and G. Diezemann, J. Chem. Phys. 147, 134909 (2017). * [23] S. Izvekov and G. A. Voth, J. Chem. Theory Comput. 5, 3232 (2009). * [24] M. Praprotnik, L. Delle Site, and K. Kremer, Phys. Rev. E 73, 197 (2006). * [25] C. Krekeler, A. Agarwal, C. Junghans, M. Praprotnik, and L. Delle Site, J. Chem. Phys. 149, 024104 (2018). * [26] M. Janke, Y. Rudzevich, O. Molokanova, T. Metzroth, I. Mey, G. Diezemann, P. E. Marszalek, J. Gauss, V. Böhmer, and A. Janshoff, Nat. Nanotech. 4, 225 (2009). * [27] T. Schlesier, T. Metzroth, A. Janshoff, J. Gauss, and G. Diezemann, J. Phys. Chem. B 115, 6445 (2011). * [28] T. Schlesier and G. Diezemann, J. Phys. Chem. B 117, 1862 (2013). * [29] B. Hess, C. Kutzner, D. van der Spoel, and E. Lindahl, J. Chem. Theory Comput. 4, 435 (2008). * [30] W. L. Jorgensen and J. Tirado-Rives, J. Am. Chem. Soc. 110, 1657 (1988). * [31] W. L. Jorgensen, D. S. Maxwell, and J. Tirado-Rives, J. Am. Chem. Soc. 118, 11225 (1996). * [32] N. Goga, A. J. Rzepiela, A. H. de Vries, S. J. Marrink, and H. J. C. Berendsen, J. Chem. Theory Comput. 8, 3637 (2012). * [33] B. Hess, H. Bekker, H. Berendsen, and J. Fraajie, J. Comput. Phys. 18, 1463 (1997). * [34] M. 
Allen and D. Tildesley, Computer Simulations of Liquids, Oxford, Oxford Science Publications, 1987. * [35] H. J. C. Berendsen, J. P. M. Postma, W. F. van Gunsteren, A. DiNola, and J. R. Haak, J. Chem. Phys. 81, 3684 (1984). * [36] S. Jaschonek and G. Diezemann, J. Chem. Phys. 146, 124901 (2017). * [37] S. Jaschonek, K. Schäfer, and G. Diezemann, J. Phys. Chem. B 123, 4688 (2019). * [38] D. Reith, M. Pütz, and F. Müller-Plathe, J. Comput. Chem. 24, 1624 (2003). * [39] M. Hanke, J. Stat. Phys. 170, 536 (2017). * [40] V. Rühle, C. Junghans, A. Lukyanov, K. Kremer, and D. Andrienko, J. Chem. Theory. Comput. 5, 3211 (2009). * [41] S. Y. Mashayak, M. N. Jochum, K. Koschke, N. R. Aluru, V. Rühle, and C. Junghans, PLoS ONE 10, e0131754 (2015). * [42] D. Fritz, K. Koschke, V. A. Harmandaris, N. F. A. van der Vegt, and K. Kremer, Phys. Chem. Chem. Phys. 13, 10412 (2011). * [43] S. Izvekov and G. A. Voth, J. Chem. Phys. 125, 151101 (2006). * [44] S. Fritsch, S. Poblete, C. Junghans, G. Ciccotti, L. Delle Site, and K. Kremer, Phys. Rev. Lett. 108, 170602 (2012). * [45] J. Zavadlav, M. N. Melo, A. V. Cunha, A. H. de Vries, S. J. Marrink, and M. Praprotnik, J. Chem. Theory Comput. 10, 2591 (2014). * [46] G. Diezemann and A. Janshoff, J. Chem. Phys. 129, 084904 (2008). * [47] U. Seifert, Europhys. Lett. 58, 792 (2002). * [48] T. Kato, K. Schäfer, S. Jaschonek, J. Gauss, and G. Diezemann, J. Chem. Phys. 151, 045201 (2019). * [49] G. Bell, Science 200, 618 (1978). * [50] G. Hummer and A. Szabo, Biophys. J. 85, 5 (2003). * [51] J. T. Bullerjahn, S. Sturm, and K. Kroy, Nature Commun. 5, 4463 (2014).
# Robust Maximum Entropy Behavior Cloning

Mostafa Hussein, Cognitive Assistive Robotics Lab, University of New Hampshire, Durham, NH 03801, [email protected]; Brendan Crowe, Department of Statistics, University of New Hampshire, Durham, NH 03801, [email protected]; Marek Petrik, Department of Computer Science, University of New Hampshire, Durham, NH 03801, [email protected]; Momotaz Begum, Cognitive Assistive Robotics Lab, University of New Hampshire, Durham, NH 03801, [email protected]

###### Abstract

Imitation learning (IL) algorithms use expert demonstrations to learn a specific task. Most existing approaches assume that all expert demonstrations are reliable and trustworthy, but what if there exist some adversarial demonstrations among the given data-set? This may result in poor decision-making performance. We propose a novel general framework to directly generate a policy from demonstrations that autonomously detects adversarial demonstrations and excludes them from the data set. At the same time, it is sample- and time-efficient and does not require a simulator. To model such adversarial demonstrations we propose a min-max problem that leverages the entropy of the model to assign a weight to each demonstration. This allows us to learn the behavior using only the correct demonstrations, even when they are mixed with adversarial ones.

## 1 Introduction and Related Work

Imitation learning (IL) addresses the problem of learning a policy from demonstrations provided by an expert [5, 17]. As robots become more involved in our daily lives, the ability to program robots and teach them new skills becomes significantly more important. The ability of a robot to effectively learn from demonstrations would greatly increase the quality of robotics applications. A common assumption in most IL approaches is that all expert demonstrations are reliable and trustworthy, but that is not always the case.
In this paper we address the problem of adversarial demonstrations and how to detect them in any given data-set. Before we go further, we want to define what an adversarial demonstration is and why it might exist in a data-set: it is any demonstration that does not follow the optimal policy/policies defined by the task expert. There are two main approaches to IL. The first is inverse reinforcement learning (IRL), where we learn a reward function that the demonstrator is trying to maximize during the task and then generate a policy that maximizes the learned reward [15, 21]. More recent approaches [7, 11] draw a connection between IRL and generative adversarial networks [6, 9] and achieve better expected returns than the classical IRL algorithms. The application of these new techniques in practice is often hindered by the need for millions of samples during training to converge, even in the simplest control tasks [13]. The second approach is behavioral cloning (BC), where the goal is to learn a mapping between states and actions as a supervised learning problem [18]. BC is considered conceptually simple and theoretically sound [26]. The main criticism of BC in its current state is covariate shift [19, 20]. One of the main advantages of BC over IRL is that it does not require a simulator or extra samples during learning. To be able to deploy a robot and safely use it in our daily lives, we must have the ability to teach the robot new tasks without the need for a simulator to sample from, while also considering time efficiency. This is only feasible using BC, and that is our main reason for building our approach upon BC. A few works like [27] assume the existence of noisy demonstrations and propose a Bayesian approach to detect them: the authors use a latent variable to assign a weight to each data point in the demonstration set and find these weights using an EM-like algorithm.
One criticism of this approach is that it relies on an assumed prior distribution, which is mostly task-dependent; moreover, it can only handle up to 10% of the data being random noise and cannot handle structured adversarial behavior. Other approaches in IRL, like [10, 23], use "failed" demonstrations to train the model besides the correct ones, but they assume that these failed demonstrations are given and labeled in the demonstration set. In this paper, we propose a novel robust probabilistic IL framework that has the ability to autonomously detect adversarial demonstrations and exclude them from the training data-set. Robust Maximum ENTropy (RM-ENT) is a framework that defines the demonstrated task by constraining the feature expectation matching between the demonstrations and the generated model. The feature matching constraint by itself cannot generate a policy, and this is where the maximum entropy principle [2, 12] plays the main role in our framework: (1) it chooses the model in the task model space that has the maximum entropy; (2) simultaneously, it analyzes the entropy contributed by each demonstration and assigns weights that distinguish between the correct and adversarial ones. We demonstrate that RM-ENT achieves better expected return and robustness than existing IRL and standard BC in classical control tasks in the OpenAI Gym simulator [4].

## 2 Preliminaries and Base Model

We use a tuple $(\mathcal{S},\mathcal{A},\rho_{0})$ to define an infinite-horizon Markov decision process (MDP), where $\mathcal{S}$ represents the state space, $\mathcal{A}$ represents the action space, and $\rho_{0}:\mathcal{S}\rightarrow\mathbb{R}$ is the distribution of the initial state $s_{0}$. Let $\pi$ denote a stochastic policy $\pi:\mathcal{S}\times\mathcal{A}\rightarrow[0,1]$ and $\pi_{E}$ denote the expert policy we have from the demonstrations.
The expert demonstrations $\mathcal{D}$ are a set of trajectories, each of which consists of a sequence of state-action pairs $\mathcal{D}=(a_{i},s_{i})_{i=1}^{Q}$, where $Q$ is the number of state-action pairs in each demonstration. In most IL algorithms we try to represent the task using a set of features $f_{i}(s,a),\ i\in\{1,2,\ldots,n\}$, that contain enough information to help us solve the IL problem while limiting the complexity of learning. Now comes one of the most common questions in the IL problem: _What should we match between the expert and the learner?_ Many answers have been introduced in the IL community, but the most successful approach until now is _feature expectation matching (FEM)_ [1, 7, 24, 28]:

$$\mathbb{E}_{\tilde{\pi}}[f_{i}]=\mathbb{E}_{\pi}[f_{i}],\quad i\in\{1,2,\ldots,n\}$$
$$\sum_{s\in\mathcal{S}}\sum_{a\in\mathcal{A}}\tilde{p}(s)\tilde{\pi}(a|s)f_{i}(s,a)=\sum_{s\in\mathcal{S}}\sum_{a\in\mathcal{A}}\tilde{p}(s)\pi(a|s)f_{i}(s,a)\qquad(1)$$

where $\tilde{p}$ is the state-action expert distribution, $p$ is the learned model, and $\tilde{p}(s)$ is the expert distribution of $s$ in the demonstration set. FEM by itself is an ill-defined problem that cannot generate a policy in the case of BC or a reward function in IRL, since there are many optimal policies that can explain a set of demonstrations, and many rewards that can explain an optimal policy. We use the principle of maximum entropy [12] to resolve the ambiguity in the model space: we look for the model that has the maximum entropy subject to the FEM constraint,

$$\max_{\pi\in\mathbb{R}^{\mathcal{S}\times\mathcal{A}}}\ H(\pi)\equiv-\sum_{s\in\mathcal{S}}\sum_{a\in\mathcal{A}}\tilde{p}(s)\pi(a|s)\log\pi(a|s)\qquad(2)$$
s.t.
$$\mathbb{E}_{\tilde{\pi}}[f_{i}]-\mathbb{E}_{\pi}[f_{i}]=0,\quad i=1,\ldots,n$$
$$\sum_{a\in\mathcal{A}}\pi(a|s)-1=0,\quad\forall\,s\in\mathcal{S}$$

Using a Lagrange multiplier we can solve this convex problem and obtain a generalized form for the policy (a complete derivation can be found in Appendix A). With this formulation we manage to generate a policy using only a few demonstrations, because the method depends on the features themselves and not on how many data points we have, as will be shown in the results section.

## 3 Robust Maximum Entropy Behavior Cloning (RM-ENT)

In the previous section, we introduced how to learn the best-fit model from our set of demonstrations, but the assumption was that those demonstrations come from the expert without any noise or inaccurate trajectories, which is not the case in real-life applications. Our goal here is to use only the subset of demonstrations that can lead us to the optimal policy and to exclude everything else. We now introduce how we add robustness to our model. We add a variable $w$, a weight given to each demonstration. The goal is to automatically give each adversarial demonstration the minimum possible weight and each correct demonstration a higher weight during learning. The main hypothesis comes from maximum entropy principles. The original definition of entropy is the average level of uncertainty inherent in a random variable, so we are looking for the demonstrations that add the least amount of entropy to the model. Put differently, an adversarial demonstration will try to add incorrect, or "random", information to the model, which will increase its entropy. The goal is therefore to limit such adversarial demonstrations by assigning them a lower weight. At the same time, if two demonstrations add the same amount of information to the model, they should have the same weight.
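For tabular state and action spaces, the maximum-entropy solution of (2) takes the exponential-family form $\pi(a|s)\propto\exp\big(\sum_{i}\lambda_{i}f_{i}(s,a)\big)$, and the multipliers $\lambda_{i}$ can be found by ascending the dual gradient, which is exactly the feature-matching gap of Eq. (1). The following NumPy sketch is illustrative only; the array shapes and the plain gradient ascent are our assumptions, not the optimizer used in this work.

```python
import numpy as np

def maxent_policy(lmbda, feats):
    """Softmax policy of the maximum-entropy solution:
    pi(a|s) proportional to exp(sum_i lambda_i * f_i(s, a)).
    feats has shape (S, A, N): feature i evaluated at (s, a)."""
    logits = feats @ lmbda                        # shape (S, A)
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

def fit_lambda(feats, p_s, pi_expert, lr=0.2, iters=8000):
    """Gradient ascent on the dual of (2): the gradient w.r.t. lambda is
    the feature-matching gap of Eq. (1), so ascent drives the model's
    feature expectations toward the expert's."""
    lmbda = np.zeros(feats.shape[2])
    target = np.einsum("s,sa,san->n", p_s, pi_expert, feats)
    for _ in range(iters):
        model = np.einsum("s,sa,san->n", p_s, maxent_policy(lmbda, feats), feats)
        lmbda += lr * (target - model)
    return lmbda
```

On a small random instance where the expert itself is a softmax policy, the fitted multipliers reproduce the expert's feature expectations to high accuracy.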
Based on this discussion, we introduce two new notations: $\tilde{p}_{w}(s)=\frac{1}{M}\displaystyle\sum_{d=1}^{D}w_{d}\cdot\tilde{p}(s|d)$ (3a) $\tilde{\pi}_{w}(s,a)=\frac{1}{M}\displaystyle\sum_{d=1}^{D}w_{d}\cdot\tilde{\pi}(s,a|d)$ (3b) where $D$ is the total number of demonstrations and $M=\sum_{d=1}^{D}w_{d}$ is the minimum number of demonstrations that we can trust in the given set. By modifying (2) with the new variable $w$ we obtain our primal problem: $\displaystyle\min_{w\in\mathbb{R}^{D}}\max_{\pi\in\mathbb{R}^{\mathcal{S}\times\mathcal{A}}}$ $\displaystyle-\sum_{s\in\mathcal{S}}\sum_{a\in\mathcal{A}}\pi(a|s)\log\pi(a|s)\sum_{d=1}^{D}w_{d}\cdot\tilde{p}(s,d)$ (4) $\displaystyle\operatorname{s.\,t.}$ $\displaystyle\sum_{d=1}^{D}w_{d}\sum_{s\in\mathcal{S}}\sum_{a\in\mathcal{A}}f_{i}(s,a)\tilde{p}(s,d)\Big{(}\pi(a|s)-\tilde{\pi}(a|s,d)\Big{)}=0,\quad i=1,\ldots,N\qquad[\pi]$ $\displaystyle\sum_{a\in\mathcal{A}}\pi(a|s)-1=0,\quad\forall s\in\mathcal{S}\qquad[\pi]$ $\displaystyle\sum_{d=1}^{D}w_{d}=M,\quad 0\leq w_{d}\leq 1,\quad\forall d=1,\dots,D\qquad[w]$ Using Lagrange multipliers we can solve this problem (a complete derivation and more details about the optimization algorithm can be found in Appendix B).

## 4 Experiments and Results

### 4.1 Experiments with Grid World

In our first experiment, we use a $5\times 5$ grid world as a toy example in which the agent starts from the lower-left square and has to make its way to the upper-right square.
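The weighted distributions in (3a)-(3b) are straightforward to compute from per-demonstration empirical counts. A minimal sketch, with illustrative array names:

```python
import numpy as np

def weighted_dists(p_s_d, pi_sa_d, w):
    """tilde{p}_w(s) and tilde{pi}_w(s,a) from Eqs. (3a)-(3b).

    p_s_d:   (D, S)    per-demonstration state distributions tilde{p}(s|d)
    pi_sa_d: (D, S, A) per-demonstration joint distributions tilde{pi}(s,a|d)
    w:       (D,)      demonstration weights; M = sum_d w_d
    """
    M = w.sum()
    p_w = (w[:, None] * p_s_d).sum(axis=0) / M
    pi_w = (w[:, None, None] * pi_sa_d).sum(axis=0) / M
    return p_w, pi_w
```

Setting a demonstration's weight to zero removes its contribution entirely, which is exactly how adversarial demonstrations are excluded.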
In this experiment we mainly want to study the effect of using different types of demonstrations and how successful our framework is at detecting adversarial demonstrations. As a reminder, our framework takes only the demonstrations as input, without any additional information about their correctness, and generates both the policy and a weight $w$ for each input demonstration. To best show the robustness of our algorithm, we used three different types of demonstrations (correct, adversarial, and random), as shown in Fig. 1. As shown in Table 1 (in Appendix C), we consider three different cases: (1) using two correct demonstrations, the algorithm correctly assigns $w=0.5$ to each demo and uses both to generate the policy (accuracy = 100%); (2) in the second case, the algorithm assigns $w=0.5$ to the two correct demonstrations and $w=0.0$ to the adversarial demonstration (accuracy = 83%); (3) in the third case, the algorithm assigns $w=0.5$ to the two correct demonstrations and $w=0.0$ to the random demonstrations (accuracy = 92%). One last note: in the case of random demonstrations, the framework is able to detect them even when the correct demonstrations are outnumbered. This is because entropy measures the randomness in the model, so the more random the actions taken, the higher the entropy and the easier the detection, as shown in Fig. 2. (a) Demo. 1 (correct) (b) Demo. 2 (correct) (c) Demo. 3 (adversarial) (d) Demo. 4 (random) Figure 1: Demonstration set used in the experiment. Figure 2: (a) result of using both correct demonstrations as a mixture; (b) result of using a correct demo. and an adversarial demo.; (c) result of using a correct demo. and a random demo.; (d) accuracy for different correct/incorrect ratios in the case of random and adversarial demonstrations.
### 4.2 Experiments with the OpenAI Gym Simulator

(a) (b) (c) Figure 3: Results of the Mountain-Car and Acrobot experiments. We run our algorithm on the classical control tasks Mountain-Car [14] and Acrobot [8] in the OpenAI Gym simulator [4]. Both tasks have a continuous state space and discrete actions. Our main baseline is BC [3]: we model $\pi_{BC}$ using a neural network with parameters $\theta_{BC}$ and find these parameters using maximum-likelihood estimation, such that $\theta_{BC}=\operatorname*{arg\,max}_{\theta}\prod_{(s,a)\in D}\pi_{BC}(a|s)$. We also compared our algorithm against one of the recent approaches in IRL [11], with two different objective functions: (1) the linear cost function from [1] (FEM); (2) game-theoretic apprenticeship learning (GTAL), the algorithm of [11] using the cost function from [25]. (More details about the experiment parameters and the number of samples can be found in Appendix C.) Figs. 3(a) and 3(b) show the performance of the different algorithms under varying numbers of expert and adversarial demonstrations. At the first point, RM-ENT behaves like BC, since only correct demonstrations are used. Starting from the second point, however, we can see the power of our algorithm: it detects the adversarial demonstration in the data set and removes it (sets its weight to zero), which keeps our accuracy unchanged, while the accuracy of the other algorithms decreases due to the adversarial demonstration. At the final point, where the adversarial demonstrations outnumber the correct ones, all algorithms degrade to a random-like policy. We also compared the time required to train each algorithm. As shown in Fig. 3(c), RM-ENT requires much less time to converge; the other algorithms are slower because they train and run neural networks.
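The BC baseline above fits $\theta_{BC}$ by maximum likelihood. As a minimal stand-in for the neural network (illustrative only, not the paper's architecture), the sketch below fits a linear soft-max policy by gradient ascent on $\sum\log\pi_{BC}(a|s)$:

```python
import numpy as np

def bc_mle(states, actions, n_actions, lr=0.5, steps=500):
    """Behavior cloning by maximum likelihood with a linear soft-max policy."""
    W = np.zeros((states.shape[1], n_actions))
    onehot = np.eye(n_actions)[actions]
    for _ in range(steps):
        logits = states @ W
        logits -= logits.max(axis=1, keepdims=True)
        pi = np.exp(logits)
        pi /= pi.sum(axis=1, keepdims=True)
        W += lr * states.T @ (onehot - pi) / len(states)  # gradient of the log-likelihood
    return W

# a trivially separable data set: positive feature -> action 0, negative -> action 1
S = np.array([[1.0], [1.0], [-1.0], [-1.0]])
A = np.array([0, 0, 1, 1])
W = bc_mle(S, A, 2)
```

Note that plain BC of this kind weights every state-action pair equally, which is why adversarial demonstrations degrade it.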
## 5 Conclusion and Future Work

In this work, we presented a novel framework that automatically assigns the proper weight to each of the given demonstrations and excludes the adversarial ones from the data set. Our algorithm achieves superior performance and sample efficiency compared to BC and IRL approaches in the presence of adversarial demonstrations. For future work, it would be promising to use a better optimization approach and to extend the framework to handle continuous action spaces.

## References

* [1] Pieter Abbeel and Andrew Y Ng. Apprenticeship learning via inverse reinforcement learning. In Proceedings of the International Conference on Machine Learning (ICML), page 1. ACM, 2004.
* [2] Shun-ichi Amari. Information geometry and its applications, volume 194. Springer, 2016.
* [3] Michael Bain and Claude Sammut. A framework for behavioural cloning. In Machine Intelligence 15, pages 103–129, 1995.
* [4] Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI Gym. arXiv preprint arXiv:1606.01540, 2016.
* [5] Sonia Chernova and Andrea L Thomaz. Robot learning from human teachers. Synthesis Lectures on Artificial Intelligence and Machine Learning, 8(3):1–121, 2014.
* [6] Chelsea Finn, Paul Christiano, Pieter Abbeel, and Sergey Levine. A connection between generative adversarial networks, inverse reinforcement learning, and energy-based models. arXiv preprint arXiv:1611.03852, 2016.
* [7] Chelsea Finn, Sergey Levine, and Pieter Abbeel. Guided cost learning: Deep inverse optimal control via policy optimization. In International Conference on Machine Learning, pages 49–58, 2016.
* [8] Alborz Geramifard, Christoph Dann, Robert H Klein, William Dabney, and Jonathan P How. RLPy: a value-function-based reinforcement learning framework for education and research. 2015.
* [9] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680, 2014.
* [10] Daniel H Grollman and Aude Billard. Donut as I do: Learning from failed demonstrations. In 2011 IEEE International Conference on Robotics and Automation, pages 3804–3809. IEEE, 2011.
* [11] Jonathan Ho, Jayesh Gupta, and Stefano Ermon. Model-free imitation learning with policy optimization. In International Conference on Machine Learning, pages 2760–2769, 2016.
* [12] Edwin T Jaynes. Information theory and statistical mechanics. Physical Review, 106(4):620, 1957.
* [13] Ilya Kostrikov, Kumar Krishna Agrawal, Debidatta Dwibedi, Sergey Levine, and Jonathan Tompson. Discriminator-actor-critic: Addressing sample inefficiency and reward bias in adversarial imitation learning. arXiv preprint arXiv:1809.02925, 2018.
* [14] Andrew William Moore. Efficient memory-based learning for robot control. 1990.
* [15] Andrew Y Ng, Stuart J Russell, et al. Algorithms for inverse reinforcement learning. In ICML, volume 1, page 2, 2000.
* [16] Jorge Nocedal and Stephen Wright. Numerical optimization. Springer Science & Business Media, 2006.
* [17] Takayuki Osa, Joni Pajarinen, Gerhard Neumann, J Andrew Bagnell, Pieter Abbeel, and Jan Peters. An algorithmic perspective on imitation learning. arXiv preprint arXiv:1811.06711, 2018.
* [18] Dean A Pomerleau. Efficient training of artificial neural networks for autonomous navigation. Neural Computation, 3(1):88–97, 1991.
* [19] Stéphane Ross and Drew Bagnell. Efficient reductions for imitation learning. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pages 661–668, 2010.
* [20] Stéphane Ross, Geoffrey Gordon, and Drew Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning.
In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pages 627–635, 2011.
* [21] Stuart Russell. Learning agents for uncertain environments. In Proceedings of the Eleventh Annual Conference on Computational Learning Theory, pages 101–103, 1998.
* [22] John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In International Conference on Machine Learning, pages 1889–1897, 2015.
* [23] Kyriacos Shiarlis, Joao Messias, and SA Whiteson. Inverse reinforcement learning from failure. 2016.
* [24] Umar Syed, Michael Bowling, and Robert E Schapire. Apprenticeship learning using linear programming. In Proceedings of the 25th International Conference on Machine Learning, pages 1032–1039. ACM, 2008.
* [25] Umar Syed and Robert E Schapire. A game-theoretic approach to apprenticeship learning. In Advances in Neural Information Processing Systems, pages 1449–1456, 2008.
* [26] Umar Syed and Robert E Schapire. A reduction from apprenticeship learning to classification. In Advances in Neural Information Processing Systems, pages 2253–2261, 2010.
* [27] Jiangchuan Zheng, Siyuan Liu, and Lionel M Ni. Robust Bayesian inverse reinforcement learning with sparse behavior noise. In Twenty-Eighth AAAI Conference on Artificial Intelligence, 2014.
* [28] Brian D Ziebart, Andrew L Maas, J Andrew Bagnell, and Anind K Dey. Maximum entropy inverse reinforcement learning. In Association for the Advancement of Artificial Intelligence (AAAI), volume 8, pages 1433–1438. Chicago, IL, USA, 2008.

## Appendix A Appendix

### A.1 Dual Problem Derivation

Starting from the primal problem: $\displaystyle\max_{\pi\in\mathbb{R}^{\mathcal{S}\times\mathcal{A}}}$ $\displaystyle H(\pi)\equiv-\sum_{s\in\mathcal{S}}\sum_{a\in\mathcal{A}}\tilde{p}(s)\pi(a|s)\log\pi(a|s)$ (5) s.t.
$\displaystyle\mathbb{E}_{\tilde{\pi}}[f_{i}]-\mathbb{E}_{{\pi}}[f_{i}]=0\quad i=1,\ldots,n$ $\displaystyle\sum_{a\in\mathcal{A}}\pi(a|s)-1=0\quad\forall\ s\in\mathcal{S}$ To derive the dual problem we use the Lagrange method for convex optimization problems. $\Lambda(\pi,\lambda,\mu)\equiv H(\pi)+\displaystyle\sum_{i=1}^{N}\lambda_{i}\Big{(}\mathbb{E}_{{\pi}}[f_{i}]-\mathbb{E}_{\tilde{\pi}}[f_{i}]\Big{)}+\displaystyle\sum_{s\in\mathcal{S}}\tilde{p}(s)\mu_{s}\Big{(}\displaystyle\sum_{a\in\mathcal{A}}\pi(a|s)-1\Big{)}$ (6) where $\lambda_{i}$ and $\mu_{s}$ are the Lagrange multipliers corresponding to each constraint. $\Lambda(\pi,\lambda,\mu)\equiv-\displaystyle\sum_{s\in\mathcal{S}}\tilde{p}(s)\displaystyle\sum_{a\in\mathcal{A}}\pi(a\mid s)\log\pi(a\mid s)+\displaystyle\sum_{i=1}^{N}\lambda_{i}\bigg{(}\mathbb{E}_{{\pi}}[f_{i}]-\mathbb{E}_{\tilde{\pi}}[f_{i}]\bigg{)}+\displaystyle\sum_{s\in\mathcal{S}}\tilde{p}(s)\mu_{s}\Big{(}\displaystyle\sum_{a\in\mathcal{A}}\pi(a|s)-1\Big{)}$ (7) Differentiating the Lagrangian with respect to the primal variables $\pi(a|s)$ and setting the derivative to zero, we obtain: $\frac{\partial\Lambda}{\partial\pi(a|s)}=-\displaystyle\sum_{s\in\mathcal{S}}\tilde{p}(s)\Big{(}1+\displaystyle\sum_{a\in\mathcal{A}}\log\pi(a|s)\Big{)}+\displaystyle\sum_{i=1}^{N}\lambda_{i}\Big{(}\displaystyle\sum_{s\in\mathcal{S}}\tilde{p}(s)\displaystyle\sum_{a\in\mathcal{A}}f_{i}(s,a)\Big{)}+\displaystyle\sum_{s\in\mathcal{S}}\tilde{p}(s)\mu_{s}$ (8) $-\displaystyle\sum_{s\in\mathcal{S}}\tilde{p}(s)\Big{(}1+\displaystyle\sum_{a\in\mathcal{A}}\log\pi(a|s)\Big{)}+\displaystyle\sum_{i=1}^{N}\lambda_{i}\Big{(}\displaystyle\sum_{s\in\mathcal{S}}\tilde{p}(s)\displaystyle\sum_{a\in\mathcal{A}}f_{i}(s,a)\Big{)}+\displaystyle\sum_{s\in\mathcal{S}}\tilde{p}(s)\mu_{s}=0$ (9)
$\displaystyle\sum_{s\in\mathcal{S}}\tilde{p}(s)\Bigg{(}-1-\displaystyle\sum_{a\in\mathcal{A}}\log\pi(a|s)+\displaystyle\sum_{i=1}^{N}\lambda_{i}\Big{(}\displaystyle\sum_{a\in\mathcal{A}}f_{i}(s,a)\Big{)}+\mu_{s}\Bigg{)}=0$ (10) Assuming $\tilde{p}(s)\neq 0$, $\log\pi(a|s)=\displaystyle\sum_{i=1}^{N}\lambda_{i}\Big{(}f_{i}(s,a)\Big{)}+\mu_{s}-1$ (11) $\pi(a|s)=\exp\bigg{(}\displaystyle\sum_{i=1}^{N}\lambda_{i}\Big{(}f_{i}(s,a)\Big{)}\bigg{)}\cdot\exp\Big{(}\mu_{s}-1\Big{)}$ (12) Since $\displaystyle\sum_{a\in\mathcal{A}}\pi(a|s)=1$, $\displaystyle\sum_{a\in\mathcal{A}}\exp\bigg{(}\displaystyle\sum_{i=1}^{N}\lambda_{i}\Big{(}f_{i}(s,a)\Big{)}\bigg{)}\cdot\exp\Big{(}\mu_{s}-1\Big{)}=1$ (13) $\frac{1}{\displaystyle\sum_{a\in\mathcal{A}}\exp\bigg{(}\displaystyle\sum_{i=1}^{N}\lambda_{i}\Big{(}f_{i}(s,a)\Big{)}\bigg{)}}=\exp\Big{(}\mu_{s}-1\Big{)}=(z_{\lambda}(s))^{-1}$ (14) Substituting into Eq. 12, we get $\pi^{*}(a|s)=(z_{\lambda}(s))^{-1}\cdot\exp\bigg{(}\displaystyle\sum_{i=1}^{N}\lambda_{i}\Big{(}f_{i}(s,a)\Big{)}\bigg{)}$ (15) Finally, the dual problem will be: $-\bigg{\\{}\max_{\lambda}\ \Lambda(\lambda)\equiv-\displaystyle\sum_{s\in\mathcal{S}}\tilde{p}(s)\log z_{\lambda}(s)+\displaystyle\sum_{i=1}^{N}\lambda_{i}\sum_{s\in\mathcal{S}}\sum_{a\in\mathcal{A}}\tilde{\pi}(s,a)f_{i}(s,a)\bigg{\\}}$ (16)

## Appendix B Appendix

### B.1 Dual Problem of Robust Maximum Entropy Behavior Cloning

We start from Eq. 16 and build upon it. As mentioned in the main text, we introduce the weight $w$ as part of our model. $\tilde{p}_{w}(s)=\frac{1}{M}\displaystyle\sum_{d=1}^{D}w_{d}\tilde{p}(s|d)$ (17a) $\tilde{\pi}_{w}(s,a)=\frac{1}{M}\displaystyle\sum_{d=1}^{D}w_{d}\tilde{\pi}(s,a|d)$ (17b) Substituting into Eq. 16, we get:
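The dual in Eq. 16 is an unconstrained problem in $\lambda$, with the partition function $z_{\lambda}(s)$ computed per state. A minimal sketch of evaluating $\Lambda(\lambda)$ for tabular states and actions, with illustrative array names:

```python
import numpy as np

def dual_objective(lam, features, p_tilde, pi_tilde):
    """Lambda(lam) = -sum_s p~(s) log z_lam(s) + sum_i lam_i sum_{s,a} pi~(s,a) f_i(s,a).

    features: (S, A, n); p_tilde: (S,) expert state distribution;
    pi_tilde: (S, A) joint expert distribution.
    """
    scores = features @ lam                      # (S, A) feature scores
    log_z = np.log(np.exp(scores).sum(axis=1))   # log z_lam(s), one per state
    expert_term = np.einsum("sa,san,n->", pi_tilde, features, lam)
    return -(p_tilde * log_z).sum() + expert_term
```

At $\lambda=0$ the objective reduces to $-\sum_{s}\tilde{p}(s)\log|\mathcal{A}|$, the (negated) entropy of the uniform policy.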
$\displaystyle\min_{w}\ \ \ -\ \bigg{\\{}\max_{\lambda}$ $\displaystyle\ \ \ \ \Lambda(\lambda)\equiv\frac{1}{M}\sum_{d=1}^{D}w_{d}\Big{(}-\sum_{s\in\mathcal{S}}\tilde{p}(s|d)\log z_{\lambda}(s)+\sum_{i=1}^{N}\lambda_{i}\sum_{s\in\mathcal{S}}\sum_{a\in\mathcal{A}}\tilde{\pi}(s,a|d)f_{i}(s,a)\Big{)}\bigg{\\}}$ (18) s.t. $\displaystyle\sum_{d=1}^{D}w_{d}=M$ $\displaystyle 0\leq w_{d}\leq 1\quad\forall d=1,\ldots,D$ For simplification, let us define: $a_{d}=\displaystyle\sum_{s\in\mathcal{S}}\tilde{p}(s|d)\log z_{\lambda}(s)$ (19) $b_{d}=\displaystyle\sum_{i=1}^{N}\lambda_{i}\sum_{s\in\mathcal{S}}\sum_{a\in\mathcal{A}}\tilde{\pi}(s,a|d)f_{i}(s,a)$ (20) $c_{d}=b_{d}-a_{d}\quad\forall d=1,\ldots,D$ (21) $\displaystyle\min_{w}\ \ \ -\ \Bigl{\\{}\max_{\lambda}$ $\displaystyle\Lambda(\lambda)\equiv\frac{1}{M}\sum_{d=1}^{D}w_{d}c_{d}\Bigr{\\}}$ (22) s.t. $\displaystyle\sum_{d=1}^{D}w_{d}=M$ $\displaystyle 0\leq w_{d}\leq 1\quad\forall d=1,\ldots,D$ By moving the negative sign inside we reach our final optimization problem: $\displaystyle\min_{\lambda,w}$ $\displaystyle\Lambda(\lambda)\equiv-\frac{1}{M}\sum_{d=1}^{D}w_{d}c_{d}$ (23) s.t. $\displaystyle\sum_{d=1}^{D}w_{d}=M$ $\displaystyle 0\leq w_{d}\leq 1\quad\forall d=1,\ldots,D$ The last formulation is a non-convex problem; we use a Sequential Quadratic Programming (SQP) approach to solve it. The basic SQP algorithm is described in Chapter 18 of Nocedal and Wright [16]. The SQP approach closely mimics Newton's method for constrained optimization, just as is done for unconstrained optimization. At each major iteration, an approximation of the Hessian of the Lagrangian function is made using a quasi-Newton updating method.
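For fixed $\lambda$ (hence fixed $c_{d}$), the weight sub-problem in Eq. 23 is linear in $w$ with box and budget constraints. A minimal sketch of this sub-step using SciPy's SLSQP (an SQP variant); this is an illustrative stand-in, not the paper's MATLAB implementation:

```python
import numpy as np
from scipy.optimize import minimize

def optimize_weights(c, M):
    """min_w -(1/M) sum_d w_d c_d  s.t.  sum_d w_d = M,  0 <= w_d <= 1."""
    D = len(c)
    res = minimize(
        lambda w: -(w @ c) / M,
        x0=np.full(D, M / D),                 # feasible starting point
        method="SLSQP",
        bounds=[(0.0, 1.0)] * D,
        constraints=[{"type": "eq", "fun": lambda w: w.sum() - M}],
    )
    return res.x

w = optimize_weights(np.array([1.0, 5.0, 3.0]), M=2)
# the two demonstrations with the largest c_d get weight ~1, the other ~0
```

The optimum lies at a vertex of the feasible set: the $M$ most informative demonstrations receive weight one, the rest weight zero, matching the behavior observed in the grid-world experiment.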
This is then used to generate a Quadratic Programming (QP) subproblem whose solution is used to form a search direction for a line search procedure. We leveraged the solver implementation in MATLAB to solve our problem (see https://www.mathworks.com/help/optim/ug/constrained-nonlinear-optimization-algorithms.html#bsgppl4).

## Appendix C Appendix

### C.1 Grid World Experiment

Table 1: Results of the grid world experiment

| Case | Demonstration | Type | Accuracy | Weight | Cor./Adv. |
|---|---|---|---|---|---|
| Mixture of correct | demo. 1, Fig. 1(a) | correct | 100% (Fig. 2) | 0.5 | 2/0 |
|  | demo. 2, Fig. 1(b) | correct |  | 0.5 |  |
| Correct & adversarial | demo. 1, Fig. 1(a) | correct | 83% (Fig. 2) | 0.5 | 2/1 |
|  | demo. 2, Fig. 1(a) | correct |  | 0.5 |  |
|  | demo. 3, Fig. 1(c) | adversarial |  | 0.0 |  |
| Correct & random | demo. 1, Fig. 1(b) | correct | 92% (Fig. 2) | 0.5 | 2/3 |
|  | demo. 2, Fig. 1(b) | correct |  | 0.5 |  |
|  | demo. 3, Fig. 1(d) | random |  | 0.0 |  |
|  | demo. 4, Fig. 1(d) | random |  | 0.0 |  |
|  | demo. 5, Fig. 1(d) | random |  | 0.0 |  |

### C.2 Details of the Classical Control Experiments in the OpenAI Gym Simulator

The expert data was generated using TRPO [22] on the true cost functions. For the adversarial demonstrations, we simply manipulated the actions of the expert data. For example, in the mountain car task we had two actions, 0 and 1: if the expert took action 0 for a given observation, we replaced it with action 1, and vice versa. The idea is to generate adversarial demonstrations that try to fool the algorithm. For a fair comparison, we used the same experimental settings as in [11], including the exact neural network architectures for the policies and the optimizer parameters for TRPO [22], for all of the algorithms except ours, which does not use any neural network. The amount of environment interaction used for FEM and GTAL is shown in Table 2. As a reminder, BC and RM-ENT do not use any additional samples during training.
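The action-flipping recipe used for two-action tasks can be sketched directly (here a hypothetical `demo` is a list of (observation, action) pairs):

```python
def make_adversarial(demo):
    """Flip each binary action (0 <-> 1) while keeping the observations,
    producing an adversarial demonstration as described for mountain car."""
    return [(obs, 1 - action) for (obs, action) in demo]

expert = [("obs0", 0), ("obs1", 1), ("obs2", 0)]
adversarial = make_adversarial(expert)
# -> [("obs0", 1), ("obs1", 0), ("obs2", 1)]
```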
Table 2: Parameters for FEM and GTAL

| Task | Training iterations | State-action pairs per iteration |
|---|---|---|
| Mountain Car | 300 | 5000 |
| Acrobot | 300 | 5000 |
2101.01255
# Quantitative Corner Case Feature Analysis of Hybrid Automata with ForFETSMT

Antonio Anastasio Bruto da Costa¹, Pallab Dasgupta¹, Nikolaos Kekatos²

¹ Dept. of Comp. Sci. and Engg., Indian Institute of Technology Kharagpur, India
² Verimag, Univ. Grenoble Alpes, France

###### Abstract The analysis and verification of hybrid automata (HA) models against rich formal properties can be a challenging task. Existing methods and tools can mainly reason about whether a given property is satisfied or violated. However, such qualitative answers might not provide sufficient information about the model behaviors. This paper presents the ForFETSMT tool, which can be used to reason quantitatively about such properties. It employs _feature automata_ and can evaluate quantitative property corners of HA. ForFETSMT uses two third-party formal verification tools as its backbone: the SpaceEx reachability tool and the SMT solver dReach/dReal. Herein, we describe the design and implementation of ForFETSMT and present its functionalities and modules. To improve the usability of the tool for non-expert users, we also provide a list of quantitative property templates.

## 1 Introduction

Formal verification techniques can provide guarantees of correctness and performance for hybrid and cyber-physical systems. Nowadays, they are supported by several robust verification tools, e.g. SpaceEx [11] and dReal/dReach [13]. A common modeling formalism for the design of such systems is that of _hybrid automata_ [2] (HA). HA can exhibit non-deterministic behaviors and have been used to model control systems and analog mixed-signal circuit designs [4, 9]. Formalizing specifications of hybrid automata such that they can be verified automatically is not an easy task, especially in an industrial setting: there is a semantic mismatch between industrial requirements and formal specifications.
Typically, industrial requirements are described in natural language, e.g. _"The caliper speed at contact must be below 2 mm/s"_, while formal specifications are expressed in a formal language such as temporal logic (TL), e.g. $\square(q\rightarrow(\square p))$ with $p:=\\{speed<=2mm/s\\}$ and $q:=\\{\text{caliper at contact}\\}$. Standard analysis tools [11] can answer reachability questions and verify whether given safety properties are satisfied. For more complex properties, one has to construct a monitor automaton and take its product with the HA [12, 14]. In practice, however, the resulting automaton can be large, leading to long analysis times and scalability issues. In addition, the answer provided by the tool is qualitative, i.e. yes or no. It is not possible to obtain quantitative measures, e.g. by what extent was the specification violated? Moreover, describing common system properties requires the ability to express quantitative measures such as overshoot, settling time, or other timing and value quantities. There are two directions to address this limitation. On the one hand, it is possible to use temporal logic. Much literature exists on TL, especially on Linear Temporal Logic (LTL) [18]. Languages such as MITL [3] and STL [15], with extensions such as xSTL [17], have been used for expressing specifications over continuous signals. Some TL languages support the use of robustness metrics over properties [10]. Such metrics measure the distance of runs of the system from the unsafe regions defined by the property. However, these languages are primarily designed to express specification correctness, and it can be tedious to use them to express quantitative measures. The other direction is to use features [1]. Unlike temporal logics like MITL or STL, the language of features is designed to explicitly specify quantitative measures. The quantity is expressed as a computation resulting from matching a behaviour description.
ForFET [6] is a tool for computing an over-approximation of features: the evaluation of a feature, written in the Feature Indented Assertion (FIA) language, over runs of a HA is automated. In a quantitative analysis, knowledge of the stimulus that produces the best- and worst-case quantity (minimum or maximum) provides insight into the system and into how to modify the design to adhere more robustly to specifications. The tool ForFETSMT addresses the needs described above, extending ForFET with the following:

* Feature corner analysis using SMT. Using SMT has two advantages: it allows us to refine the feature range beyond what ForFET produces, and it enables us to generate a witness trace describing the stimulus and behaviour for the best- and worst-case quantities.
* Support for parameterized features and an extended language for features having mixed urgent and non-urgent semantics.
* Usability and support: (i) two translators, written in MATLAB and Octave, for converting models from the SpaceEx formalism to ForFET's modeling language; (ii) support for custom paths for the workspace, models, and third-party tools.

## 2 Design and Implementation

```
module buck(v,i,t)
  output v,i,t;
  parameter Vr = 12, ... , b1c = 0, T = 1e-05, D = 0.51667;
  mode closed
  begin
    ddt t = 1;
    ddt v = (a10c*i + a11c*v + b1c*Vs);
    ddt i = (a00c*i + a01c*v + b0c*Vs);
  end
  ...
  property inv closed
    mode==closed |=> t<=D * T && t>=0;
  endproperty
  property trans closed_open
    mode==closed && mode'==open && t>=D*T |=> i'==i && t'==0 && v'==v;
  endproperty
  ...
  initial begin
    set begin mode == closed; i == 0; v == 0; t == 0; end
  end
endmodule

feature settlingTime(Vr,E);
begin
  var st;
  (v>=Vr+E) ##[0:$] @+(state==Open) && (v<=Vr+E), st=$time
  ##[0:$] @+(state==Open) && (v<=Vr+E)
  |-> settlingTime = st;
end
```

Figure 1: HA of the buck regulator: HASLAC code snippet of the HA description, and quantitative specification of settling time as a feature.
We begin with a running example, as shown in Figure 1, to explain the inputs to ForFET${}^{SMT}$. The example is a buck regulator taken from standard benchmarks [16]. The regulator receives an input voltage and, in the presence of reasonably varying loads, ensures that it provides an unchanging output voltage. ### 2.0.1 Hybrid Automaton Description in HASLAC One input to the tool is the HA description. The HA of the buck regulator has two locations, open and closed, indicating the state of the switch that charges the capacitor of the regulator. The HA is specified in the Hybrid Automaton Specification Language for Analog Mixed-Signal (AMS) Circuits (HASLAC), the model description language for ForFET and ForFETSMT. HASLAC is specially designed to mimic semiconductor circuit behavioural model description languages such as Verilog-AMS, to make the adoption of formal analysis in the semiconductor circuit design flow less intimidating. HASLAC describes each location of the HA as a mode, with each transition and invariant expressed as a property. In the model description, v and i are aliases for the HA variables ${\tt x_{1}}$ and ${\tt x_{2}}$. ### 2.0.2 Quantitative Specification using Features A feature formally defines a quantitative specification, i.e. a measurement over behaviours of the system. Unlike properties, which either match or fail and thus have a Boolean outcome, the outcome of evaluating a feature is a real-valued interval. The language of features is easier to use and understand for non-experts, especially in the AMS domain, and it can be evaluated with the use of reachability tools. In our example, the intent to measure the time taken for the output voltage to settle into a stable state can be expressed as the feature settlingTime.
The feature contains three core components: (i) a set of behaviours over which measurements are made, (ii) variables, local to a feature, that may be assigned values in the antecedent, as a matching behaviour is observed, and (iii) the feature compute expression, over local variables, evaluated once the behaviour has matched. The behaviour described by the feature in Figure 1 reads as follows, "(v<=Vr+E) is true and thereafter v settles below (Vr+E) for two successive openings of the capacitor switch". The expressions (v>=Vr+E) and (v<=Vr+E) are predicates over real-variables (PORVs). state is a special variable allowing us to write predicates over the location labels of the HA. The construct @+(P) represents an event, and is true only on the positive edge of the predicate P. A behaviour in the feature expresses a sequence of Boolean expressions over PORVs and events separated by time-delays. The statement "P ##[a:b] Q" is true whenever Q occurs within a time interval of a and b from when P is true; $a,b\in\mathbb{R}^{+}$, $b\geq a$. The syntax ##[a:b] represents a time-delay. The symbol $ represents the notion "anytime after a". Observe that P can be true over a dense time interval, and for each point in the interval where "P ##[a:b]" is true, Q can be true yielding an infinite number of matches. A more complete description of the language for features is available in [7]. ###### Remark 1 A feature behaviour may match in one or multiple (potentially infinite) runs of the HA, at one or multiple (potentially infinite) time-points. Each match has the potential to yield a different feature value. Evaluating a feature over runs of a HA, therefore, yields an interval $[\mathcal{F}_{min},\mathcal{F}_{max}]$ of feature values. We call this a feature range. ### 2.0.3 Algorithm A functional overview of ForFETSMT is shown in Figure 2. The tool ForFET is marked within a blue box. 
ForFETSMT extends ForFET by introducing an iterative refinement step that refines the range provided by ForFET. It also introduces a wrapper around the SMT solver dReal in order to correctly visualize a trace that acts as a witness for each corner of the feature range. The tool works as follows. The user provides two inputs (Step 1): a hybrid automaton model $\mathcal{H}$ and a feature specification $\mathcal{F}$ (a single feature or a set of features). ForFETSMT computes the product automaton (Step 2) according to [7]. Step 3 involves using SpaceEx [11] to compute reach-sets for the transformed model $\mathcal{H}_{F}$. This results in a feature range $[\mathcal{F}_{min},\mathcal{F}_{max}]$, computed as an evaluation of the feature expression on the runs matching the feature sequence-expression (Step 4). The feature range is then refined iteratively through a search using an SMT solver (Steps 4 to 7). Figure 2: ForFETSMT corner case analysis: (a) functional overview; (b) SMT feature refinement; (c) computing the left corner. The HyST [5] converter is used internally to translate the model $\mathcal{H}_{F}$ into an acceptable format for use with dReach. In each interaction between our tool and dReach, called a query, a goal statement is constructed to direct dReach to prove the existence or non-existence of a feature value in a given domain. Each query in Step 5 includes the model description for $\mathcal{H}_{F}$, a goal statement, and a maximum transition hop count $K$, which is translated by dReach into SMT clauses. The response of the SMT solver (Step 6) is either unsatisfiable or satisfiable. In the latter case, a single timed trace of the HA is made available. dReal generates the trace as a JSON file with time-stamped valuations for the variables of the automaton, which is parsed to identify the feature values for the trace.
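The exact search strategy is not spelled out here, but one plausible sketch of the Steps 4-7 refinement loop is a bisection over candidate feature values, with the dReach query acting as a satisfiability oracle. The `sat` callback and all names below are illustrative assumptions, not the tool's actual interface:

```python
def refine_left_corner(sat, lo, hi, tol=1e-3):
    """Bisection for the smallest feasible feature value in [lo, hi].

    `sat(a, b)` stands in for a dReach/dReal query: "does some run of the
    product automaton realize a feature value in [a, b]?"; [lo, hi] is the
    sound range obtained from the reachability step.
    """
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if sat(lo, mid):          # a witness exists in the lower half
            hi = mid              # tighten the corner from above
        else:
            lo = mid              # lower half infeasible: corner lies above
    return hi                     # refined left corner, within tol

corner = refine_left_corner(lambda a, b: b >= 0.3, 0.0, 1.0)
# converges toward 0.3 when the feasible feature values are [0.3, 1]
```

The last satisfiable query along the way also yields the witness trace for that corner.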
The search concludes in Step 7 with a refined feature range $[\mathcal{F}^{*}_{min},\mathcal{F}^{*}_{max}]$ as well as a trace corresponding to each feature range corner value. ### 2.0.4 Implementation ForFETSMT is implemented in C/C++. The parsers for features and the HASLAC language are implemented in flex and bison, which are translated into C/C++. The language was chosen for its efficiency in handling the complex operations and data structures involved in computing the product automaton of the HA and the feature monitor. We represent the algorithm as a function that takes the HA and the feature as inputs and produces a range of feature values $[\mathcal{F}^{*}_{min},\mathcal{F}^{*}_{max}]$ as output, along with a trace that acts as a witness for each extremal corner of the range. The algorithm is guaranteed to terminate for bounded-time traces [7]. As such, it constitutes a procedure for computing feature ranges over bounded time horizons, which is what is expected in practice. ### 2.0.5 Challenges using dReal In general, HA use urgent locations to represent ordered discrete transformations. In a trace, dReal provides a series of indexed, time-ordered tuples representing a trace satisfying the query. In our experience, when the model $\mathcal{H}_{F}$ contains urgent locations, dReal generates a NULL tuple representing a visit to an urgent location. The visualization tools provided by the authors of dReal do not support drawing traces containing a NULL tuple. To enable visualization for all traces generated by dReal, ForFETSMT post-processes the traces generated by dReal: it eliminates all NULL tuples and re-indexes the remaining ones to be consistent with the syntax expected by the visualization tool. ### 2.0.6 Support for SpaceEx Models The SpaceEx modeling language has become the quasi-standard interchange format for defining and describing HA in the formal verification community [4]. It offers a graphical user interface, respects the SX grammar [8], and its models are written as XML files.
ForFETSMT accepts HA models written in HASLAC. To bridge the mismatch between the two languages and facilitate the use of ForFETSMT with existing SpaceEx models and HA benchmarks, we provide two translators, written in MATLAB and Octave respectively. The translators require a SpaceEx model (necessary) and a configuration file (optional). They come with an XML parser (partly written in Java) and perform syntactic translation while also handling modeling differences. Note that there exist other converters tailored to hybrid automata and SpaceEx, e.g. HyST [5].

## 3 Installation and Usage

### 3.0.1 Installation

The ForFETSMT tool is available at a public GitHub repository. The repository may be cloned in full using the following command:

git clone https://github.com/antoniobruto/ForFET2.git

The tool is written in C/C++. Before building the tool, one needs g++, flex, bison, glib-2.0, json-glib-1.0, and the C/C++ standard libraries for 32-bit binaries (ia32-libs on Ubuntu 10.04 and later). The tool can be compiled by running ./buildForFET.sh in the cloned directory. One must also ensure that SpaceEx (http://spaceex.imag.fr/sites/default/files/downloads/private/spaceex_exe-0.9.8f.tar.gz) and the SMT translator and solver dReach and dReal (https://github.com/dreal/dreal3/releases) are installed and executable in the user's path. The tool also uses the HA translator HyST (provided with ForFETSMT), which requires a Java run-time environment to be installed.

### 3.0.2 Usage

The compiled ForFETSMT tool is driven from the command line. Once compiled, the tool binary resides within the forFET directory as the binary forFET. Standard invocation involves executing the binary with a configuration file, by running ./forFET CONFIG-FILE-NAME. The configuration file specifies where third-party libraries may be found. An example configuration file is provided in forFET/default.cfg.
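For orientation, a configuration file of this kind typically just maps tool names to paths. The sketch below is hypothetical: the key names and layout are invented for illustration, and the actual ones should be taken from the provided forFET/default.cfg.

```
# Hypothetical sketch only -- consult forFET/default.cfg for the real keys.
SPACEEX=/usr/local/bin/spaceex
DREACH=/opt/dreal3/bin/dReach
DREAL=/opt/dreal3/bin/dReal
HYST=./hyst/Hyst.jar
```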
## 4 Experimental Evaluation

In this section, we present selected results on three models provided with ForFETSMT: a battery charger, a cruise control, and a buck regulator. We tested a wide variety of features, capturing state-dependent, time-dependent, and sequential properties, as well as combinations of them. Some of these properties can also be encoded as control specifications, e.g. overshoot or settling time. More details about the models, specifications, features, and analysis results can be found in the tool manual. For each model, the feature range is computed first using SpaceEx and is then refined using SMT. Our observations show that, except in one case, reachability analysis and SMT require similar time to compute the expected feature range. The exception arises when the model switches location often, as the buck regulator does; in such cases, SMT may be more vulnerable to state-space explosion. However, the additional computational overhead leads to tighter feature ranges.

## 5 Concluding Remarks

In this paper, we have presented ForFETSMT, a formal feature evaluation tool for hybrid automata (HA), emphasizing its architecture and utilities. Features form a promising and practical research direction, as they can be used on top of or alongside standard monitoring and hybrid reachability tools to provide quantitative measures of HA behaviors. ForFETSMT uses the HASLAC language for writing HA models and is linked to the SpaceEx reachability tool and the dReal/dReach SMT solver. Using such an SMT solver to compute features can produce concrete traces for feature corner points and lead to the generation of tighter feature ranges. The traces for corners of the feature range can provide insights that guide experts in refining their designs.

## References

* [1] A. Ain, A. Bruto da Costa, and P. Dasgupta. Feature indented assertions for analog and mixed-signal validation. IEEE TCAD, PP(99):1–1, 2016.
* [2] R. Alur et al.
The algorithmic analysis of hybrid systems. Theoretical Computer Science, 138:3–34, 1995.
* [3] R. Alur, T. Feder, and T. A. Henzinger. The benefits of relaxing punctuality. J. ACM, 43(1):116–146, Jan. 1996.
* [4] ARCH. Benchmarks for continuous and hybrid system verification, 2015.
* [5] S. Bak et al. HyST: A source transformation and translation tool for hybrid automaton models. In HSCC, Seattle, Washington, Apr. 2015. ACM.
* [6] A. A. Bruto da Costa and P. Dasgupta. ForFET: A formal feature evaluation tool for hybrid systems. In Proc. of ATVA, pages 437–445, 2017.
* [7] A. A. Bruto da Costa, G. Frehse, and P. Dasgupta. Formal feature interpretation of hybrid systems. IEEE TCAD, 37(11):2474–2484, 2018.
* [8] S. Cotton, G. Frehse, and O. Lebeltel. The SpaceEx modeling language, 2010.
* [9] T. Dang et al. Verification of analog and mixed-signal circuits using hybrid system techniques. In FMCAD, pages 21–36. 2004.
* [10] J. V. Deshmukh et al. Robust online monitoring of signal temporal logic. Formal Methods in System Design, 51(1):5–30, 2017.
* [11] G. Frehse et al. SpaceEx: Scalable verification of hybrid systems. In CAV, 2011.
* [12] G. Frehse et al. A toolchain for verifying safety properties of hybrid automata via pattern templates. In ACC, pages 2384–2391, June 2018.
* [13] S. Gao et al. dReal: An SMT solver for nonlinear theories over the reals. In CADE, pages 208–214, 2013.
* [14] N. Kekatos. Formal Verification of Cyber-Physical Systems in the Industrial Model-Based Design Process. PhD thesis, 2018.
* [15] O. Maler and D. Nickovic. Monitoring temporal properties of continuous signals. In FORMATS-FTRTFT, pages 152–166. Springer, 2004.
* [16] L. V. Nguyen and T. T. Johnson. Benchmark: DC-to-DC switched-mode power converters. In ARCH14-15, volume 34, pages 19–24, 2015.
* [17] D. Nickovic et al. AMT 2.0: Qualitative and quantitative trace analysis with extended signal temporal logic. In TACAS, pages 303–319, 2018.
* [18] A. Pnueli.
The temporal logic of programs. In Proceedings of the 18th Annual Symposium on Foundations of Computer Science, SFCS ’77, pages 46–57, Washington, DC, USA, 1977. IEEE Computer Society.
Three lens space summands from the Poincaré homology sphere

Jacob Caudell
Department of Mathematics, Boston College, Chestnut Hill, MA 02467

Abstract. A regular fiber of the Seifert fibering of the Poincaré homology sphere admits a Dehn surgery to $L(2,1)\#L(3,2)\#L(5,4)$. We prove that this is the only knot in the Poincaré homology sphere with a surgery to a connected sum of more than two lens spaces.

§ INTRODUCTION.

§.§ Background.

A classical theorem, independently proved by Lickorish [12] and by Wallace [21], implies that for any pair of closed orientable 3-manifolds $Y$ and $Y'$, there exists a link $L\subset Y$ admitting a Dehn surgery to $Y'$. When can we characterize a knot in a given 3-manifold by the Dehn surgeries it admits? Since Moser's classification of surgeries on torus knots in $S^3$ almost 50 years ago [14], Dehn surgery characterization problems have sustained the interest of 3-manifold topologists—for example, there is the Berge conjecture <cit.> and the cabling conjecture <cit.>, <cit.>, among other problems. One of the most celebrated results in this direction to date is the Dehn surgery characterization of the unknot: the unknot is the only knot in $S^3$ that admits a non-trivial surgery to $S^3$ [6]. In this paper we give a Dehn surgery characterization of a knot in the Poincaré homology sphere $\mathcal P$.

§.§ A notable surgery.

Consider $\mathcal P$ as the Seifert fibered space $M((2,-1),(3,1),(5,1))$ <cit.>, and note that any Seifert fibering of $\mathcal P$ is isotopic to this one <cit.>. Let $F\subset \mathcal P$ be a regular fiber of $\mathcal P$, the isotopy class of which is unambiguous. Note that the two surgery diagrams in Figure 1 both present $L(2,1)\#L(3,2)\#L(5,4)$. The designated surgery slope of $0$ on $F$ specifies the slope represented by a regular fiber on the boundary of the exterior of $F$ in $\mathcal P$, $\mathcal P_F$.
We are now ready to state the main result of the current work.

[Figure 1: The two surgery diagrams, related by a slam-dunk in Kirby calculus, both presenting $L(2,1)\#L(3,2)\#L(5,4)$.]

Theorem 1. Let $K \subset \mathcal P$. Suppose $K$ admits a Dehn surgery to a connected sum of more than two lens spaces. Then $K = F$.

§.§ Changemakers reloaded.

From the homological perspective, $\mathcal P$ is one of the simplest non-trivial 3-manifolds, as $H_*(\mathcal P) \cong H_*(S^3)$. From the cut and paste perspective, lens spaces—the 3-manifolds obtained by identifying two solid tori along their boundaries—are the simplest non-trivial 3-manifolds. From the perspective of Ozsváth–Szabó's Heegaard Floer homology, $\mathcal P$, lens spaces, and connected sums thereof are all as simple as possible, as they realize equality in the bound $\rk\ \widehat{HF}(Y)\geq |H_1(Y)|$. We call such a 3-manifold an L-space. We call a knot in any 3-manifold with a non-trivial surgery to an L-space an L-space knot.
Our proof of Theorem 1 relies on the same pair of complementary genus bounds as Greene's proof of the cabling conjecture for connected sums of lens spaces in [8]. In particular, we make use of the fact that $\mathcal P$ and $L(p,q)$ are L-spaces. Recall that for $K$ a knot in an integer homology 3-sphere there is a canonical identification of the set of slopes on $K$ with $\mathbb Q \cup \{1/0\}$. An appeal to the surgery exact triangle in Heegaard Floer homology gives us a favorable genus bound: for $K\subset \mathcal P$ and $p/q>0$, if $p/q$-surgery on $K$, denoted by $K(p/q)$, is an L-space, then $p/q\geq 2g(K)-1$, where $g(K)$ denotes the minimum genus of an orientable surface in $\mathcal P$ bounded by $K$. We first show that if $K(p/q)$ is a connected sum of more than two lens spaces, then $p/q>2g(K)-1$. With this strict inequality, we may use the work of Matignon–Sayari [13], building on work of Hoffman [9], to find an essential 2-sphere in $K(p/q)$ that meets the core of the surgery in two points. With this 2-sphere in mind, we obtain a Seifert fibering of $\mathcal P_K$ from which we deduce Theorem 1.

We show that $K(2g(K)-1)$ is never a connected sum of more than two lens spaces by way of changemaker lattice embeddings. Let $\{e_0,e_1,\ldots, e_n\}$ be an orthonormal basis for $-\mathbb Z^{n+1}$. A vector $\sigma = (\sigma_0,\sigma_1,\ldots, \sigma_n) \in -\mathbb Z^{n+1}$ is a changemaker if $0\leq \sigma_0\leq \ldots\leq \sigma_n$ and, for all $i \in \{1, \ldots, n\}$, $\sigma_i\leq \sum_{j=0}^{i-1}\sigma_j + 1$. A lattice $L$ is a changemaker lattice if $L$ embeds in $-\mathbb Z^{\rk L+1}$ as the orthogonal complement to a changemaker $\sigma\in -\mathbb Z^{\rk L +1}$. For $X$ a compact 4-manifold, denote by $Q_X$ the free $\mathbb Z$-module $H_2(X)/\text{Tors}$ equipped with the integer-valued symmetric bilinear form given by the intersection pairing of surfaces in $X$. Along the way to Theorem 1, we prove the following.
Theorem 3. Let $K\subset \mathcal P$ be an L-space knot. If $K(2g(K)-1)$ bounds a sharp (cf. Definition 6) 4-manifold $X$ with $\rk\ {H_2(X)}=n$ and $H_1(X)$ torsion-free, then $Q_X$ embeds in $-\mathbb Z^{n+1}$ as the orthogonal complement to a changemaker $\sigma$ with $\langle \sigma, \sigma\rangle = -(2g(K)-1)$.

In particular, we make use of the following variant of Donaldson's theorem, originally due to Frøyshov [4], as Greene suggests in <cit.>.

Theorem 4. Let $Z$ be a smooth, oriented, compact 4-manifold with $\rk \ Q_Z = n$ and $\partial Z = \mathcal P$, oriented as above. If $Z$ is negative definite, then $Q_Z \cong -\mathbb Z^n$ or $Q_Z\cong -E_8 \oplus -\mathbb Z^{n-8}$, where $-E_8$ is the integer lattice whose pairing is given by the matrix
\[ \begin{bmatrix} -2 & 1 & 1 & 0 & 1 & 0 & 0 & 0\\ 1 & -2 & 0 & 0 & 0 & 0 & 0 & 0\\ 1 & 0 & -2 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & -2 & 0 & 0 & 0 & 0\\ 1 & 0 & 0 & 0 & -2 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & -2 & 1 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 & -2 & 1\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & -2 \end{bmatrix}. \]

§.§ A generalized cabling conjecture for connected sums of lens spaces.

Contrast Theorem 1 with the abundance of knot surgeries from $S^3$ to connected sums of lens spaces. The 3-sphere admits infinitely many distinct Seifert fiberings over $S^2$ with two exceptional fibers; each torus knot appears as a regular fiber of some Seifert fibering of $S^3$. Each torus knot admits a surgery to a connected sum of lens spaces—as do appropriate cables thereof. Greene proved that torus knots and their cables account for all knots in $S^3$ with surgeries to connected sums of lens spaces. A lens space also admits infinitely many Seifert fiberings over $S^2$ with two exceptional fibers, and thus contains infinitely many knots with surgeries to a connected sum of lens spaces.
Moreover, Baker showed that for any pair of integers $r$ and $s$, there are infinitely many lens spaces containing hyperbolic knots admitting surgeries to $L(r,1)\#L(s,1)$ <cit.>, subsuming all previously known examples of hyperbolic knots in lens spaces with surgeries to connected sums of lens spaces. The uniqueness of the surgery in Theorem 1 leads us to pose the following conjecture.

Conjecture 5. Let $K$ be a knot in a 3-manifold $Y$ admitting a Seifert fibering over $S^2$ with $n\geq 3$ exceptional fibers. If Dehn surgery on $K$ yields a connected sum of more than $n-1$ lens spaces, then there is a Seifert fibering of $Y$ in which $K$ is a regular fiber.

In private correspondence with Baker, we learned of a conjecture more general than Conjecture 5: if a one-cusped hyperbolic 3-manifold admits two exceptional Dehn fillings, one of which is non-prime, then the non-prime filling has only two summands.

§.§ Organization.

In Section 2 we collect the necessary input from Heegaard Floer homology and lattice theory to prove that $K(2g(K)-1)$ is never a connected sum of more than two lens spaces for $K \subset \mathcal P$. In Section 3 we invoke the main theorem of Matignon–Sayari <cit.> to show that a knot satisfying the hypotheses of Theorem 1 is a regular fiber of $\mathcal P$.

§.§ Conventions.

All manifolds are assumed to be smooth and oriented. All homology groups are taken with integer coefficients. Denote by $U$ the unknot in $S^3$. We orient the lens space $L(p,q)$ as $U(-p/q)$.

§.§ Acknowledgments.

Thanks to my advisor, Josh Greene, for many insightful and enjoyable conversations, for his enduring support, and for helpful feedback on an early version of this paper. Thanks to Ken Baker for an interesting conversation about Conjecture 5.

§ INPUT FROM FLOER HOMOLOGY.

§.§ A negative definite 4-manifold with boundary $\mathcal P$.
Recall that to a rational homology sphere $Y$ equipped with a spin$^\text{c}$ structure $\mathfrak t$, Ozsváth–Szabó associated a numerical invariant $d(Y,\mathfrak t)\in \mathbb Q$, called a correction term, satisfying $d(-Y,\mathfrak t)=-d(Y,\mathfrak t)$. They also proved that if $Y$ is the boundary of a negative definite 4-manifold $X$, then
\begin{equation} c_1(\mathfrak s)^2 + b_2(X) \leq 4d(Y, \mathfrak t) \end{equation}
for every $\mathfrak s \in \text{Spin}^\text{c}(X)$ extending $\mathfrak t \in \text{Spin}^\text{c}(Y)$.

Definition 6. A negative definite 4-manifold $X$ is sharp if, for every $\mathfrak t \in \text{Spin}^\text{c}(Y)$, there exists some extension $\mathfrak s \in \text{Spin}^\text{c}(X)$ attaining equality in the bound $(1)$.

Following Greene's construction in <cit.>, let us consider a knot $K$ in an integer homology sphere $Y$ and suppose that $K(p)$ bounds a negative definite 4-manifold $X$ with $H_1(X)$ torsion-free, where $p$ is some positive integer. Denote the trace cobordism of $p$-surgery on $K$ by $W_p(K)$, and let $W=-W_p(K)$. The homology class of the closed surface obtained by capping off a Seifert surface for $K$ with the core of the 2-handle attachment, $\Sigma$, generates $H_2(W)$ and satisfies $\langle[\Sigma],[\Sigma]\rangle = -p$. Form the oriented 4-manifold
\begin{equation} Z := W\cup_{K(p)}X, \end{equation}
and note that $Z$ is negative definite with $\rk \ Q_Z=\rk\ Q_W+\rk \ Q_X$ and $\partial Z = Y$. We identify $\text{Spin}^\text{c}(K(p)) \cong \mathbb Z/p\mathbb Z$ as follows. Every $\mathfrak t\in \text{Spin}^\text{c}(K(p))$ extends to some $\mathfrak s \in \text{Spin}^\text{c}(W_p(K))$, and the residue of $\langle c_1(\mathfrak s), [\Sigma]\rangle + p$ mod $2p$ is an even integer $2i$ that is independent of the choice of extension $\mathfrak s$. The assignment $\mathfrak t \mapsto i$ gives the desired identification. With this notation in place we state the following lemma.
Its proof is no different from that of <cit.>, which treats the case $Y=S^3$.

Lemma 7. Suppose that $K(p)$ bounds a smooth, negative definite 4-manifold $X$ with $\rk \ H_2(X)=n$ and $H_1(X)$ torsion-free, and form $Z = W\cup X$ as in (2). Then every $i \in \text{Spin}^\text{c}(K(p))$ extends to some $\mathfrak s \in \text{Spin}^\text{c}(Z)$, and
\begin{equation} c_1(\mathfrak s)^2 + (n+1) \leq 4d(K(p),i)-4d(U(p), i). \end{equation}
Furthermore, if $X$ is sharp, then for every $i$ there exists some extension $\mathfrak s$ that attains equality in $(3)$.

We now assume that $Y$ is an L-space. Let $K\subset Y$ be an L-space knot, and consider the Alexander polynomial of $K$,
\begin{equation*} \Delta_K(T) = \sum_{j=-g}^g a_j\cdot T^j, \ \ g := \deg(\Delta_K), \end{equation*}
and define the torsion coefficients
\begin{equation*} t_i(K) = \sum_{j\geq 1}j \cdot a_{|i|+j}. \end{equation*}
Tange noted that the torsion coefficients of an L-space knot in any L-space integer homology sphere $Y$ with irreducible exterior have many of the same properties as they do for L-space knots in $S^3$ [20]. Namely, for $i\geq 0$, they form a non-increasing sequence of non-negative integers [17], with $t_i(K) = 0$ if and only if $i\geq g = g(K)$ [16]. Tange also states the following result explicitly; its proof follows from that of <cit.>, replacing $S^3$ by an arbitrary integer homology sphere L-space.

Theorem 8. Let $K$ be an L-space knot in an integer homology sphere L-space $Y$, and let $p$ be a positive integer. Then the torsion coefficients and correction terms satisfy
\begin{equation} d(Y) - 2t_i(K) = d(K(p),i)-d(U(p),i),\text{ for all } |i| \leq p/2. \end{equation}

We now focus our attention on the left-hand side of (3). The first Chern class map
\begin{equation*} c_1: \text{Spin}^\text{c}(Z) \to H^2(Z) \end{equation*}
has image the set of characteristic covectors for $Q_Z$.
Identify $H_2(Z) \cong H^2(Z,Y)\cong H^2(Z)$, first by Poincaré duality and then by the long exact sequence of a pair in cohomology; then this set corresponds to
\begin{equation*} \Char(Q_Z) = \big\{\mathfrak c \ |\ \langle \mathfrak c, v\rangle \equiv \langle v, v \rangle\ \mod 2 \text{ for all } v \in Q_Z\big\}. \end{equation*}
Write $\tau$ for the image of the class $[\Sigma]$ under the inclusion $H_2(W)\hookrightarrow H_2(Z)$. With the preceding notation in place, the following lemma follows by combining Lemma 7 with Theorem 8.

Lemma 9. Let $K$ denote an L-space knot in an L-space integer homology sphere $Y$, and suppose that $K(p)$ bounds a smooth, negative definite 4-manifold $X$ with $\rk \ H_2(X) = n$ and $H_1(X)$ torsion-free. Then
\begin{equation} \mathfrak c^2 + (n+1)-4d(Y)\leq -8t_i(K) \end{equation}
for all $|i|\leq p/2$ and $\mathfrak c \in \text{Char}(Q_Z)$ such that $\langle \mathfrak c, \tau\rangle + p\equiv 2i \ \mod 2p$. Furthermore, if $X$ is sharp, then for every $|i|\leq p/2$ there exists $\mathfrak c$ attaining equality in (5).

Greene arrived at the notion of a changemaker lattice from a careful analysis of how Lemma 9 forces elements of $\Char(Q_Z)$ of self-pairing $-(n+1)$ to pair against $\tau$ when $Y= S^3$, in which case Donaldson's theorem implies $Q_Z\cong -\mathbb Z^{n+1}$. Changemaker lattices also arise for us on considering Theorem 4 in the case that a small positive integer surgery on an L-space knot in $\mathcal P$ bounds a sharp 4-manifold.

Lemma 10. Let $Y = \mathcal P$, and let $Z$ be as in (2). If $p \leq 2g(K)-1$, then $Q_Z\cong -\mathbb Z^{n+1}$.

Since $Q_Z$ is negative definite, it follows by Theorem 4 that $Q_Z\cong -\mathbb Z^{n+1}$ or $Q_Z \cong -E_8\oplus -\mathbb Z^{n-7}$.
Suppose $Q_Z\cong -E_8 \oplus -\mathbb Z^{n-7}$, and let $\tau = (s, \sigma)$ where $s \in - E_8$, $\sigma\in -\mathbb Z^{n-7}$, and by change of basis we have $0\leq \sigma_0 \leq \sigma_1\leq \ldots \leq \sigma_{n-8}$, and let $\mathfrak c = (\underbrace{0,\ldots,0}_8,\underbrace{1,\ldots,1}_{n-7})$. Then $$0\geq \langle \mathfrak c, \tau\rangle\geq \langle \tau, \tau\rangle=-p.$$ Since $0 = \mathfrak c ^2 + n+1 - 8 \leq -8t_i(K)$ for $i = \frac{\langle\mathfrak c, \tau\rangle + p}{2}$ by Lemma 9, it follows that $t_i(K)=0$, so $i\geq g(K)$, $\langle \mathfrak c, \tau \rangle + p \geq 2g(K)$, and therefore $p>2g(K)-1$.

By Lemma 10, we have an embedding $Q_W\oplus Q_X\hookrightarrow Q_Z = -\mathbb Z^{n+1}$, where the generator of $Q_W$ is sent to some $\tau\in -\mathbb Z^{n+1}$ satisfying $0\leq \tau_0\leq\cdots\leq \tau_n$ and $\langle \tau, \tau\rangle =-(2g(K)-1)$. By Lemma 9, we have that $\mathfrak c^2 + (n+1) -8 \leq -8t_i(K)$ for all $|i|\leq (2g(K)-1)/2$ and $\mathfrak c \in \Char(-\mathbb Z^{n+1})$ such that $\langle\mathfrak c , \tau \rangle + 2g(K)-1 \equiv 2i \ \mod 2(2g(K)-1)$, and, by the sharpness of $X$, for every $|i|\leq (2g(K)-1)/2$ there exists $\mathfrak c$ attaining equality. Note that for any $\mathfrak c \in \{\pm1\}^{n+1}$, we have $\mathfrak c^2 + (n+1) -8 = -8$ and $-8t_i(K)\leq -8$ for all $|i|\leq g(K)-1= \lfloor(2g(K)-1)/2\rfloor$. Let $f(K) = \min\{i\geq 0\ | \ t_i(K)= 1\}$. Then we have the equality
\begin{equation} \{\langle \mathfrak c, \tau \rangle \ | \ \mathfrak c \in \{\pm 1\}^{n+1}\} = \{j \in [(2g(K)-1)-2f(K), (2g(K)-1) + 2f(K)]\ | \ j \equiv 1 \ \mod 2\}, \end{equation}
which we rewrite as
\begin{equation} \{|\langle \chi, \tau\rangle|\ |\ \chi \in \{0,1\}^{n+1}\} = \{0, 1, \dots, |\tau|_1\}, \end{equation}
from which we see that $\tau$ is a changemaker <cit.>.
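As a quick numerical sanity check, the changemaker condition from the introduction is exactly what makes every integer between $0$ and $|\tau|_1$ realizable as $|\langle \chi, \tau\rangle|$ for some $\chi \in \{0,1\}^{n+1}$, as in (7). A sketch (taking the empty sum at $i=0$, which forces the smallest entry to be at most $1$):

```python
# Verify the changemaker condition and the subset-sum property it encodes:
# every integer in [0, |sigma|_1] arises as <chi, sigma>, chi in {0,1}^{n+1}.

def is_changemaker(sigma):
    """0 <= sigma_0 <= ... <= sigma_n and sigma_i <= sigma_0+...+sigma_{i-1}+1
    for every i (the i = 0 case, with empty sum, forces sigma_0 <= 1)."""
    if any(s < 0 for s in sigma) or list(sigma) != sorted(sigma):
        return False
    return all(sigma[i] <= sum(sigma[:i]) + 1 for i in range(len(sigma)))

def subset_sums(sigma):
    """All values <chi, sigma> over chi in {0,1}^{len(sigma)}."""
    sums = {0}
    for s in sigma:
        sums |= {t + s for t in sums}
    return sums

sigma = (1, 1, 2, 5)
print(is_changemaker(sigma))                             # True
print(subset_sums(sigma) == set(range(sum(sigma) + 1)))  # True: 0..9 all realized
print(is_changemaker((1, 1, 4)))                         # False: 4 > 1 + 1 + 1
```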
To see that $Q_X = (\tau)^\perp$ on the nose, observe that $\rk \ Q_X = \rk \ (\tau)^\perp$ and $\disc(Q_X) = |H_1(K(2g(K)-1))| = 2g(K)-1 = |\tau| = \disc((\tau)^\perp)$, using <cit.> at the last step, so the two lattices coincide.

§.§ Linear lattices.

Let $p>q>0$ be integers. There is a unique continued fraction expansion
$$p/q =[x_1,x_2,\ldots,x_n]^-= x_1 - \frac{1}{x_2-\frac{1}{\ddots- \frac{1}{x_n}}}$$
with each $x_i\geq 2$ an integer. The lens space $L(p,q)$ is the oriented boundary of the negative definite 4-manifold $X(p,q)$ given by attaching 4-dimensional 2-handles to a linear chain of $n$ unknots in the boundary of $B^4$, where the framing of the $i^\text{th}$ handle attachment is $-x_i$ (see Figure 2). We note that $X(p,q)$ is sharp, and that $Q_{X(p,q)}$ is indecomposable.

[Figure 2: A Kirby diagram for $X(p,q)$: a linear chain of $n$ unknots with framings $-x_1, -x_2, \ldots, -x_n$.]

The connected sum of lens spaces $\#_{i=1}^m L(p_i,q_i)$ bounds a canonical sharp 4-manifold $X:=\natural_{i=1}^m X(p_i,q_i)$, a Kirby diagram for which is given by the disjoint union of the surgery diagrams for the $X(p_i,q_i)$, $1\leq i \leq m$. The lattice $Q_X \cong \oplus_{i=1}^m Q_{X(p_i,q_i)}$ then contains $m$ indecomposable summands.
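The negative continued fraction expansion above is easy to compute with ceiling division. A short sketch, together with a check that the expansion evaluates back to $p/q$:

```python
from fractions import Fraction

def neg_cf(p, q):
    """Negative continued fraction p/q = [x_1, ..., x_n]^- with all x_i >= 2."""
    assert p > q > 0
    xs = []
    while q > 0:
        x = -(-p // q)           # ceiling of p/q
        xs.append(x)
        p, q = q, x * q - p      # p/q = x - 1/(p'/q') with p' = q, q' = x*q - p
    return xs

def evaluate(xs):
    """Evaluate [x_1, ..., x_n]^- = x_1 - 1/(x_2 - 1/(... - 1/x_n))."""
    val = Fraction(xs[-1])
    for x in reversed(xs[:-1]):
        val = x - 1 / val
    return val

# Framings for the chain presenting X(5,4): 5/4 = [2,2,2,2]^-.
print(neg_cf(5, 4))            # [2, 2, 2, 2]
print(evaluate(neg_cf(7, 3)))  # Fraction(7, 3)
```

The `x_i >= 2` normalization holds automatically because each remainder satisfies `0 <= x*q - p < q`.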
Improving on our initial appeal to the surgery exact triangle in Heegaard Floer homology, the basic result that a changemaker lattice has at most two indecomposable summands <cit.> allows us to conclude the following.

Proposition 11. If $K(p/q)$ is a connected sum of more than two lens spaces, then $p/q > 2g(K)-1$.

If $K(p/q)$ is a connected sum of more than two lens spaces, it is an L-space bounding a sharp 4-manifold whose intersection form has more than two indecomposable summands, so if $p/q$ is positive, then $p/q>2g(K)-1$ by combining an appeal to the surgery exact triangle, Theorem 3, and <cit.>. If $p/q<0$, we may build a negative definite 4-manifold with boundary $-\mathcal P$—a contradiction, since a familiar application of Donaldson's theorem would then imply that $-E_8$ embeds in a diagonal lattice, which it does not. By combining an appeal to the surgery exact triangle with the observation that $|H_1(K(p/q))|=|p|$, we see that $|p/q|>1$, since if $g(K)= 0$, then $K$ bounds a disk and thus $K(p/q)=\mathcal P \# -L(p,q)$.

Let $x_i\geq 2$, $1\leq i\leq n$, be integers satisfying $[x_1,\ldots, x_n]^-=|p/q|$. Let $L = K_1 \cup \cdots \cup K_n$ be the framed link in $\mathcal P$ where $K_1 = K$, $K_i$ is a meridian of $K_{i-1}$ for $2\leq i\leq n$, and the framing of $K_i$ is $-x_i$, $1\leq i \leq n$. Denote by $W_{p/q}(L)$ the trace cobordism of surgery on the framed link $L$, which has negative definite intersection form and $\partial W_{p/q}(L)=-\mathcal P \coprod K(p/q)$. Let $X$ be the canonical sharp 4-manifold, described above, with boundary the connected sum of lens spaces oriented as $-K(p/q)$. Then $Z:=W_{p/q}(L)\cup_{K(p/q)}X$ is negative definite with boundary $-\mathcal P$. We conclude that $p/q >2g(K)-1$.

§ AN ESSENTIAL ANNULUS IN $\mathcal P_K$.

With Proposition 11 in hand, if $K(p/q)$ is a connected sum of more than two lens spaces, then we may use the following theorem of Matignon–Sayari to complete the proof of Theorem 1.
Theorem 12. Let $M$ be an irreducible 3-manifold with boundary a torus $T$. Let $\lambda$ be a slope on $T$ that bounds an orientable surface in $M$, and let $g$ be the genus of this surface. If there exists a reducing slope $r$, then either $\Delta(r, \lambda)\leq 2g-1$, or else the minimum geometric intersection number of an essential 2-sphere in $M(r)$ with the core of the $r$-Dehn filling is 2.

Suppose that $K\subset \mathcal P$ admits a surgery to a connected sum of more than two lens spaces. For us, $\lambda$ is the 0-framing of $K$, and the slope $p/q$ of this surgery is strictly greater than $2g-1$ by Proposition 11. In particular, $\Delta(0,p/q) = p >2g-1$, so $K(p/q)$ contains an essential 2-sphere which meets the core of $p/q$-Dehn surgery in exactly 2 points by Theorem 12. We use this information to complete the proof of Theorem 1 with the following two lemmas.

Lemma 13. Let $p/q > 2g-1$ with $q>1$, and suppose that $K(p/q)$ is reducible for $K\subset \mathcal P$. Then $K$ is an exceptional fiber of $\mathcal P$ and the $(p,q)$-cable of $K$ is a regular fiber.

We invoke Theorem 12 to identify an essential 2-sphere $\hat A\subset K(p/q)$ which intersects the core of the surgery solid torus in two points. Let $A$ denote the essential annulus $\hat A \cap \mathcal P_K$. Then $A$ is separating in $\mathcal P_K$, and $\partial A$ separates $\partial \mathcal P_K$ into two annuli $A_1, A_2$ with $\partial A_i = \partial A$ for $i=1,2$. Observe that each boundary component of $A, A_1,$ and $A_2$ is a $(p,q)$-curve on $\partial \mathcal P_K$. Denote by $T_i$ the torus $A_i \cup A$, $i = 1,2$. Since $\mathcal P$ is atoroidal, each $T_i$ bounds a solid torus $V_i$ in $\mathcal P$. We will show that in fact each $V_i$ is contained in $\mathcal P_K$, giving a decomposition of $\mathcal P_K$ as $V_1\cup_A V_2$. Then $\mathcal P_K$ admits a Seifert fibering with two exceptional fibers, so $K$ is an exceptional fiber of $\mathcal P$.
[Figure 3: The implied Seifert fibering of $V_1$ if $\nu(K) \subset V_1$; note that $A$ is boundary compressible in $\mathcal P_K$.]

To see that $V_1$ is contained in $\mathcal P_K$, suppose to the contrary that $\nu(K)\subset V_1$. Then $A_2\subset V_1$. Consider the Seifert fibering of $V_1$ over $D^2$ induced by extending the fibering of $A_1$ by $(p,q)$-curves on $\partial \nu(K)$ over $T_1$. In this Seifert fibering, $A_2$ is isotopic to a union of regular fibers and $K$ appears as an exceptional fiber of type $p/q$ (see Figure 3). Since $V_1$ is a solid torus, $K$ is then the only exceptional fiber in this Seifert fibering. It follows that $A$ is isotopic to $A_2$ in $V_1\setminus \nu(K)$, and therefore $A$ is not essential in $\mathcal P_K$. By the same argument, we see that $V_2$ is also contained in $\mathcal P_K$, hence $\mathcal P_K = V_1 \cup_A V_2$ as desired. Furthermore, since $A$ is not boundary parallel in $\mathcal P_K$, the Seifert fibering of $\mathcal P_K$ induced by those on $V_1$ and $V_2$ has two exceptional fibers (see Figure 4). Equip $\nu(K)$ with the Seifert fibering induced by $(p,q)$-curves on $\partial \nu(K)$ to see that $K$ is an exceptional fiber of $\mathcal P = \mathcal P_K\cup_{A_1 \cup A_2}\nu(K)$, and that the $(p,q)$-cable of $K$ is a regular fiber.
[Figure 4: The Seifert fibering of $\mathcal P_K$ induced by the decomposition $\mathcal P_K = V_1\cup_A V_2$ has at least two exceptional fibers, since $A$ is not parallel into $\partial \mathcal P_K$, and at most two exceptional fibers, since it is the union of the solid tori $V_1$ and $V_2$ along $A$. The annulus $A$ is a union of regular fibers in the induced Seifert fiberings of both $V_1$ and $V_2$.]

Lemma 14. Let $p\geq 2g$ be an integral reducing slope for $K \subset \mathcal P$. Suppose that $K(p)$ is a connected sum of more than two lens spaces. Then $K$ is a regular fiber of $\mathcal P$.

We again invoke Theorem 12 to identify an essential properly embedded annulus $A$ in $\mathcal P_K$ with boundary slope $p$. Since $p$ is integral, there is an annulus $A'\subset \nu(K)$ with $\partial A' = \partial A$ and $K \subset A'$. Let $T= A\cup A'$. Since $\mathcal P$ is atoroidal, it follows that $T$ bounds a solid torus. Denote the core of this solid torus by $E$, and observe that $K$ is the $(p,q)$-cable of $E$ for some $p,q\in \mathbb Z$ with $|q|\geq 2$ such that $E(p/q)$ is a connected sum of at least two lens spaces. That $K$ is a regular fiber of $\mathcal P$ then follows from Lemma 13.

Let $K \subset \mathcal P$, and suppose that $K(p/q)$ is a connected sum of more than two lens spaces. By Proposition 11, $p/q> 2g(K)-1$. By Lemma 13, we conclude that $q=1$, since if $p/q>2g(K)-1$ is non-integral and $K(p/q)$ is reducible, then $K(p/q)$ is a connected sum of two lens spaces. By Lemma 14, we conclude that $K = F$. We may furthermore identify that $p/q =|H_1(L(2,1)\#L(3,2)\#L(5,4))|=30$.
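The final slope identification is a small arithmetic check, using the standard facts that $|H_1(L(p,q))| = p$ and that first homology orders multiply under connected sum:

```python
from math import prod

# |H_1(L(p,q))| = p, and |H_1| is multiplicative under connected sum,
# so the surgery slope identified in Theorem 1 is the product of the orders.
summands = [(2, 1), (3, 2), (5, 4)]      # L(2,1) # L(3,2) # L(5,4)
slope = prod(p for p, q in summands)
print(slope)  # 30
```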
If we can identify a reducing sphere that meets the core of surgery on $K$ in two points, for $K$ satisfying the hypotheses of Conjecture 5, then the topological arguments in this section can be adapted to prove the conjecture with help from the classification of embedded tori in Seifert fibered spaces <cit.>. For a Seifert fibered space over $S^2$ that is not an integer homology sphere L-space, i.e. not $S^3$ or $\pm \mathcal P$ [3], parts of our argument break down. For example, if $K$ is non-trivial in homology then there need not be a slope on $\partial \nu(K)$ bounding an orientable surface in the exterior of $K$. Even if $K$ is nullhomologous, a positive L-space slope on $K$ need not be bounded below by $2g(K)-1$; in fact, every knot $K$ that is doubly primitive in a Brieskorn sphere that is not an L-space admits an integral surgery slope $p$ to a lens space, and it follows from <cit.> that $|p|\leq 2g(K)-1$. [1] K. L. Baker: A cabling conjecture for knots in lens spaces, Bol. Soc. Mat. Mex. (3) 20 (2014), no. 2, 449–465. [2] M. Boileau and J.-P. Otal: Scindements de Heegaard et groupe des homéotopies des petites variétés de Seifert, Invent. Math. 106 (1991), no. 1, 85–107. [3] E. Eftekhary: Seifert fibered homology spheres with trivial Heegaard Floer homology, 2009, available at <https://arxiv.org/abs/0909.3975>. [4] K. Frøyshov: The Seiberg-Witten equations and four-manifolds with boundary, Math. Res. Lett. 3 (1996), no. 3, 373–390. [5] F. González-Acuña and H. Short: Knot surgery and primeness, Math. Proc. Cambridge Philos. Soc. 99 (1986), no. 1, 89–102. [6] C. McA. Gordon and J. Luecke: Knots are determined by their complements, J. Amer. Math. Soc. 2 (1989), no. 2, 371–415. [7] J. E. Greene: The lens space realization problem, Ann. of Math. (2) 177 (2013), no. 2, 449–511. [8] J. E. Greene: L-space surgeries, genus bounds, and the cabling conjecture, J. Differential Geom. 100 (2015), no. 3, 491–506. [9] J. A. 
Hoffman: There are no strict great $x$-cycles after a reducing or $P^2$ surgery on a knot, J. Knot Theory Ramifications 7 (1998), no. 5, 549–569. [10] W. Jaco: Lectures on three-manifold topology, American Mathematical Society, Providence, R.I. (1980). [11] R. Kirby: Problems in low-dimensional topology (2010). <math.berkeley.edu/~kirby/problems.ps.gz>. [12] W. B. R. Lickorish: A representation of orientable combinatorial $3$-manifolds, Ann. of Math. (2) 76 (1962), 531–540. [13] D. Matignon and N. Sayari: Longitudinal slope and Dehn fillings, Hiroshima Math. J. 33 (2003), no. 1, 127–136. [14] L. Moser: Elementary surgery along a torus knot, Pacific J. Math. 38 (1971), 737–745. [15] P. Ozsváth and Z. Szabó: Absolutely graded Floer homologies and intersection forms for four-manifolds with boundary, Adv. Math. 173 (2003), no. 2, 179–261. [16] P. Ozsváth and Z. Szabó: Holomorphic disks and genus bounds, Geom. Topol. 8 (2004), 311–334. [17] P. Ozsváth and Z. Szabó: On knot Floer homology and lens space surgeries, Topology 44 (2005), no. 6, 1281–1300. [18] J. Rasmussen: Lens space surgeries and L-space homology spheres, 2007, available at <https://arxiv.org/abs/0710.2531>. [19] N. Saveliev: Lectures on the topology of 3-manifolds, 2nd edition, Walter de Gruyter & Co., Berlin (2012). [20] M. Tange: Lens spaces given from $L$-space homology 3-spheres, Experiment. Math. 18 (2009), no. 3, 285–301. [21] A. H. Wallace: Modifications and cobounding manifolds, Canadian J. Math. 12 (1960), 503–528.
2101.01258
# Spontaneous fractional Chern insulators in transition metal dichalcogenides Moiré superlattices Heqiu Li Department of Physics, University of Michigan, Ann Arbor, Michigan 48109, USA Umesh Kumar Theoretical Division, T-4, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, USA Kai Sun Department of Physics, University of Michigan, Ann Arbor, Michigan 48109, USA Shi-Zeng Lin Theoretical Division, T-4 and CNLS, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, USA ###### Abstract The Moiré superlattice realized in two-dimensional heterostructures offers an exciting platform to access strongly correlated electronic states. In this work, we study transition metal dichalcogenide (TMD) Moiré superlattices with time-reversal symmetry and nontrivial spin/valley-Chern numbers. Utilizing realistic material parameters and the method of exact diagonalization, we find that at a certain twist angle and fractional filling, gapped fractional topological states, i.e., fractional Chern insulators, are naturally stabilized by simply introducing the Coulomb repulsion. In contrast to fractional quantum Hall systems, where the time-reversal symmetry has to be broken explicitly, these fractional states break the time-reversal symmetry spontaneously. We show that the contrasting Chern numbers in the opposite valleys impose a strong constraint on the nature of the fractional Chern insulator and the associated low-energy excitations. _Introduction._ — When two layers of two-dimensional materials are placed atop each other with slight misalignment, a superlattice is created with periodicity much larger than the atomic lattice parameter. Because of the large lattice periodicity, one can fill or empty the entire band by electrode gating. This Moiré superlattice provides a tunable platform to control the electronic band structure Lopes dos Santos _et al._ (2007); Bistritzer and MacDonald (2011), and therefore enables access to a plethora of interesting quantum states. 
Because the band width in these systems can be tuned to be extremely narrow Bistritzer and MacDonald (2011), these Moiré superlattices open up a new pathway to stabilize various strongly-correlated phases such as superconductivity and correlated insulators Cao _et al._ (2018a, b); Lu _et al._ (2019); Yankowitz _et al._ (2019); Kerelsky _et al._ (2019); Cao _et al._ (2019); Polshyn _et al._ (2019); Xie _et al._ (2019); Jiang _et al._ (2019); Choi _et al._ (2019); Zondiner _et al._ (2020); Wong _et al._ (2020); Nuckolls _et al._ (2020); He _et al._ (2020); Liu _et al._ (2020); Regan _et al._ (2020); Wang _et al._ (2020); Xie and MacDonald (2020); Wu and Das Sarma (2020); Su and Lin (2020); Padhi _et al._ (2018, 2020); Padhi and Phillips (2019); Stefanidis and Sodemann (2020); Bultinck _et al._ (2020a). Furthermore, such electronic band structure can also be topologically nontrivial, e.g., with a nonzero integer Chern number Zhang _et al._ (2019); Repellin _et al._ (2020); Sharpe _et al._ (2019); Serlin _et al._ (2019); Chen _et al._ (2019). Combined with their strong coupling nature, such Moiré superlattices offer a promising route to realize the long-sought fractionalized topological order Ledwith _et al._ (2020); Repellin and Senthil (2020); Abouelkomsan _et al._ (2020); Liu _et al._ (2021); Wilhelm _et al._ (2020); Sohal _et al._ (2018); Sohal and Fradkin (2020). Recently, gapped electronic states at various fractional fillings (e.g., $1/3$) were observed in transition metal dichalcogenide (TMD) Moiré superlattices, e.g., $\mathrm{WSe_{2}/WS_{2}}$ Regan _et al._ (2020); Xu _et al._ (2020); Jin _et al._ (2020); Zhou _et al._ (2020); Huang _et al._ (2020). 
In general, gapped electronic states at fractional filling may have two origins: (a) charge order that spontaneously breaks the translational symmetry and (b) fractional topological order, e.g., fractional Chern insulators (FCI) Tang _et al._ (2011); Sun _et al._ (2011); Neupert _et al._ (2011); Regnault and Bernevig (2011); Sheng _et al._ (2011); Parameswaran _et al._ (2013); Bergholtz and Liu (2013); Wu _et al._ (2012). In these TMD Moiré superlattices, the observed gapped states were interpreted as Wigner crystals of electrons, because the underlying single-particle bands are topologically trivial Wu _et al._ (2018). Encouraged by such exciting experimental progress, here we explore the feasibility of the second category in TMD Moiré superlattices. In particular, we focus on systems like $\mathrm{MoTe_{2}}$, which may host topologically nontrivial bands with non-zero spin/valley-Chern numbers Wu _et al._ (2019). In contrast to a partially filled Chern band Tang _et al._ (2011); Sun _et al._ (2011); Neupert _et al._ (2011); Regnault and Bernevig (2011); Sheng _et al._ (2011); Parameswaran _et al._ (2013); Bergholtz and Liu (2013); Wu _et al._ (2012), these systems preserve the time-reversal symmetry, so two types of fractional states are in principle allowed: (a) time-reversal invariant fractional topological insulators Levin and Stern (2009) and (b) FCIs via spontaneously breaking the time-reversal symmetry. The key focus of this study is whether Coulomb repulsion could stabilize some of these fractional states in TMD Moiré superlattices. In this work, we show that by simply increasing the Coulomb interaction strength in such TMD Moiré superlattices, the system undergoes a quantum phase transition that spontaneously breaks the time-reversal symmetry by polarizing electrons into one of the two valleys. Further increase of the Coulomb interaction triggers a second quantum phase transition and stabilizes an FCI at a fractional filling. 
For excitations, our numerical studies observe both (intravalley) fractional excitations from the fractional topological order and (intervalley) valley-wave excitations from the spontaneous symmetry breaking. We argue that the symmetry-breaking state and the low-energy excitations are constrained by the valley-contrasting Chern number in TMD Moiré superlattices. _Model._ — We consider twisted homobilayer TMD materials. For each single layer, the low-energy electronic states reside at the valence band maxima at the $\pm\bm{K}$ valleys. Contrary to bilayer graphene systems, where the valley and spin degrees of freedom are both present, in TMD each valley in the top valence band has a fixed spin orientation due to strong spin-orbit coupling and the broken inversion symmetry Xiao _et al._ (2012). With a small twist angle $\theta$ between the two layers, the $+\bm{K}$ valleys of the top and bottom layers are shifted to $\bm{K}_{t}$ and $\bm{K}_{b}$ in the Moiré Brillouin zone (MBZ), respectively [Fig. 1(b)]. For convenience we choose the rhombus-shaped MBZ and set the point $\bm{M}=(\bm{K}_{t}+\bm{K}_{b})/2$ as the origin. We employ the continuum model Bistritzer and MacDonald (2011) in which the Moiré Hamiltonian for the $+\bm{K}$ valley is: $\displaystyle H_{+}(\bm{k},\bm{r})=\left(\begin{array}{cc}-\frac{\hbar^{2}\left(\bm{k}-\bm{K}_{b}\right)^{2}}{2m^{*}}+\Delta_{b}(\bm{r})&\Delta_{T}(\bm{r})\\ \Delta_{T}^{\dagger}(\bm{r})&-\frac{\hbar^{2}\left(\bm{k}-\bm{K}_{t}\right)^{2}}{2m^{*}}+\Delta_{t}(\bm{r})\end{array}\right)$ (3) Here $m^{*}$ is the effective mass. 
The form of the Moiré potential, $\Delta_{b,t,T}$, is dictated by the $D_{3}$ crystalline symmetry and a combination of the $C_{2z}$ rotation followed by switching the two layers, and can be parameterized by Wu _et al._ (2019): $\displaystyle\Delta_{T}(\bm{r})=w\left(1+e^{-i\bm{G}_{2}\cdot\bm{r}}+e^{-i\bm{G}_{3}\cdot\bm{r}}\right),\qquad\Delta_{l}(\bm{r})=2w_{z}\sum_{j=1,3,5}\cos\left(\bm{G}_{j}\cdot\bm{r}+l\psi\right),$ (4) where $l\in\{b,t\}=\{+1,-1\}$ and $\bm{G}_{j}$ are the Moiré reciprocal lattice vectors with length $|\bm{G}_{j}|=\frac{4\pi}{\sqrt{3}a_{M}}$ and polar angle $\frac{\pi(j-1)}{3}$. Here $a_{M}=a_{0}/\theta$ is the Moiré lattice constant for a small twist angle $\theta$ and $a_{0}$ is the lattice parameter of the TMD. The Hamiltonian for the $-\bm{K}$ valley is obtained by the time-reversal symmetry, $H_{-}(\bm{k},\bm{r})=H_{+}(-\bm{k},\bm{r})^{*}$. To be specific, we focus on the twisted MoTe2 homobilayer with typical parameters $({\hbar^{2}}/{2m^{*}a_{0}^{2}},\ w_{z},\ w,\ \psi)=(495\ \mathrm{meV},\ 8\ \mathrm{meV},\ -8.5\ \mathrm{meV},\ -89.6^{\circ})$ Wu _et al._ (2019). The top valence band of a TMD single layer splits into multiple Moiré bands due to the Moiré potential. As shown in Fig. 1(c) and (d), when the twist angle is close to $\theta_{0}=1.38^{\circ}$, the top Moiré band becomes nearly flat. The flatness of a band can be characterized by the ratio of the gap to the nearest band over its bandwidth; for the top Moiré band, this ratio can be as large as $13$. When $\theta<3.1^{\circ}$, the top Moiré band is topological, characterized by a valley/spin Chern number $C=\pm 1$ due to the skyrmion-lattice pseudospin textures of the Moiré potential Wu _et al._ (2019). The Chern number for the opposite valley/spin is opposite, as required by time-reversal symmetry. Thus, at the single-particle level, such a TMD homobilayer realizes a quantum valley/spin Hall insulator. 
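To make the construction concrete, the following sketch (ours, not the authors' code) assembles $H_+(\bm{k})$ from Eqs. (3)-(4) in a truncated plane-wave basis and diagonalizes it at one $\bm{k}$ point. The shell cutoff `nsh`, the particular choice of MBZ corners for $\bm{K}_b$ and $\bm{K}_t$, and all variable names are our assumptions; the parameters are the MoTe2 values quoted above, with lengths in units of $a_M$ and energies in meV.

```python
import numpy as np

# Hedged sketch of the +K-valley continuum Hamiltonian of Eqs. (3)-(4),
# assembled in a plane-wave basis |k + G> and diagonalized at one k point.
# hbar^2/(2 m* a_M^2) follows from the quoted hbar^2/(2 m* a_0^2) = 495 meV
# and a_M = a_0 / theta.
theta = np.deg2rad(1.38)
Ekin = 495.0 * theta**2                      # hbar^2/(2 m* a_M^2) in meV
wz, w, psi = 8.0, -8.5, np.deg2rad(-89.6)

g = 4 * np.pi / np.sqrt(3)                   # |G_j| in units of 1/a_M
G = {j: g * np.array([np.cos(np.pi * (j - 1) / 3),
                      np.sin(np.pi * (j - 1) / 3)]) for j in range(1, 7)}

nsh = 4                                      # plane-wave shells kept (truncation)
Gs = [m * G[1] + n * G[2] for m in range(-nsh, nsh + 1)
      for n in range(-nsh, nsh + 1)]
N = len(Gs)

# Valley momentum offsets: two inequivalent MBZ corners (a convention choice).
Kb = (G[1] + G[2]) / 3.0
Kt = -Kb

key = lambda v: tuple(np.round(v, 6))        # robust lookup of G differences
k = np.array([0.0, 0.0])                     # sample point of the MBZ
H = np.zeros((2 * N, 2 * N), dtype=complex)
for i, Gi in enumerate(Gs):
    H[i, i] = -Ekin * ((k + Gi - Kb) ** 2).sum()           # bottom layer
    H[N + i, N + i] = -Ekin * ((k + Gi - Kt) ** 2).sum()   # top layer
    for i2, Gi2 in enumerate(Gs):
        d = key(Gi - Gi2)
        for j in (1, 3, 5):                  # Delta_b (l=+1), Delta_t (l=-1)
            if d == key(G[j]):
                H[i, i2] += wz * np.exp(1j * psi)
                H[N + i, N + i2] += wz * np.exp(-1j * psi)
            elif d == key(-G[j]):
                H[i, i2] += wz * np.exp(-1j * psi)
                H[N + i, N + i2] += wz * np.exp(1j * psi)
        for q in (np.zeros(2), -G[2], -G[3]):   # Delta_T Fourier components
            if d == key(q):
                H[i, N + i2] += w
H[N:, :N] = H[:N, N:].conj().T               # Delta_T^dagger block

assert np.allclose(H, H.conj().T)            # Hermiticity sanity check
E = np.linalg.eigvalsh(H)
print("four highest Moire valence energies (meV):", np.round(E[-4:], 2))
```

Scanning $\bm{k}$ over the MBZ and tracking the top eigenvalues would reproduce a band structure of the kind shown in Fig. 1(c), up to the truncation and corner-convention choices made here.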
We then introduce the screened Coulomb interaction and project it to the nearly flat top Moiré band Lee _et al._ (2019): $\displaystyle H_{\mathrm{int}}=\frac{1}{2A}\sum_{\bm{q}}:\rho(\bm{q})V(\bm{q})\rho(-\bm{q}):\,=\sum_{\bm{k},\bm{k}^{\prime},\bm{q},\tau,\tau^{\prime}}\frac{U}{2N_{\mathrm{cell}}}v(\bm{q})\lambda_{\tau,\bm{q}}(\bm{k})\lambda_{\tau^{\prime},\bm{q}}(\bm{k}^{\prime})^{*}\,C^{\dagger}_{\tau}(\bm{k})C^{\dagger}_{\tau^{\prime}}(\bm{k}^{\prime}+\bm{q})C_{\tau^{\prime}}(\bm{k}^{\prime})C_{\tau}(\bm{k}+\bm{q}),$ (5) where $\tau=\pm$ is the valley index, and $\lambda_{\tau,\bm{q}}(\bm{k})=\langle u_{\tau,\bm{k}}\lvert u_{\tau,\bm{k}+\bm{q}}\rangle$ is the form factor originating from the projection. Here $v(\bm{q})={4\pi\tanh(qd)}/{\sqrt{3}qa_{M}}$ is the dimensionless screened Coulomb potential, with $d$ the separation between the electrode and the Moiré superlattice, which is set to $d=2a_{M}$ in the calculations. $A$ is the system area and $N_{\mathrm{cell}}$ is the number of unit cells. The coefficient of $v(\bm{q})$ is chosen to make $U$ equal to the bare Coulomb potential between two particles separated by $a_{M}$. $C_{\tau}(\bm{k})$ is the annihilation operator for the single-particle state $\lvert u_{\tau,\bm{k}}\rangle$. We neglect the weak intervalley impurity scattering process associated with a large momentum transfer, and therefore the Hamiltonian also has a valley $U(1)_{v}$ symmetry. In this model, there are two competing symmetry-breaking states: an intervalley coherent state that breaks the valley $U(1)_{v}$ symmetry and a valley/spin-polarized state that breaks the time-reversal symmetry. 
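The role of the gate screening in $v(\bm{q})$ can be seen directly: the $\tanh(qd)$ factor regularizes the $q\to 0$ divergence of the bare two-dimensional Coulomb potential. A minimal numerical sketch (ours, not from the paper), in units $a_M=1$ with $d=2$ as in the text:

```python
import numpy as np

# Dimensionless gate-screened potential v(q) = 4*pi*tanh(q d)/(sqrt(3) q)
# from Eq. (5), in units a_M = 1 and with electrode distance d = 2 a_M.
d = 2.0

def v(q):
    q = np.asarray(q, dtype=float)
    qs = np.maximum(q, 1e-12)        # guard the removable q -> 0 limit
    return 4 * np.pi * np.tanh(qs * d) / (np.sqrt(3) * qs)

# tanh(q d)/q -> d as q -> 0: gating removes the 2*pi/q divergence of the
# bare 2D Coulomb interaction, leaving the finite value 4*pi*d/sqrt(3).
print(float(v(1e-9)), float(v(1.0)))
```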
At half filling of the topmost band (accounting for the valley degree of freedom), our Hartree-Fock analysis and exact-diagonalization results both suggest that a valley-polarized state is energetically favored, which spontaneously breaks the time-reversal symmetry and leads to an interaction-induced Chern insulator sup . At fractional filling, in principle, two types of fractional topological states might emerge, a fractional Chern insulator or a fractional topological insulator Levin and Stern (2009); Stern (2016), depending on whether the time-reversal symmetry is spontaneously broken or preserved. (One may think of another possibility, analogous to the Halperin $(m_{1},\ m_{2},\ n_{1})$ states; however, this state is not favored because of the opposite Chern number in the opposite valley sup .) Our exact diagonalization below shows that the FCI is favored and stabilized in our system. Figure 1: (a): Schematic view of the Moiré superlattice. (b): We choose the Moiré Brillouin zone (MBZ) to be the rhombus, and the origin in momentum space is chosen at $M$. (c): Moiré band structure at $\theta=1.38^{\circ}$. The top Moiré band is nearly flat with Chern number $\pm 1$. (d): The gap ratio $\frac{\Delta_{12}}{W}=\frac{\min(E_{1}(\bm{k}))-\max(E_{2}(\bm{k}))}{\max(E_{1}(\bm{k}))-\min(E_{1}(\bm{k}))}$ as a function of the twist angle $\theta$, where $E_{1}(\bm{k})$ ($E_{2}(\bm{k})$) is the energy of the first (second) topmost Moiré band. _Valley polarized FCI._ — We define the filling factor $\nu=2\rho_{e}/\rho_{s}$, where $\rho_{e}$ is the electron density occupying the top Moiré band and $\rho_{s}$ is the electron density for full filling of the two-fold degenerate top Moiré band. The factor 2 accounts for the valley degree of freedom. Using exact diagonalization, at $\nu=1/3$ we observe numerical evidence of spontaneous valley polarization and an FCI in the strong-interaction limit, as shown in Fig. 2(a). 
For 8 electrons in $4\times 6$ unit cells ($4\times 6\times 2$ single-particle states including both valleys), the ground states are fully valley polarized, with three nearly degenerate ground states for each valley polarization, separated from the excited states by an energy gap of the order of $2$ K. We calculated the many-body Chern number of each ground state Sheng _et al._ (2011), and the topological index is found to be $1/3$, characterizing a $1/3$ FCI phase. This conclusion is further supported by the total momentum of each ground state, which obeys the generalized Pauli exclusion rule Bernevig and Regnault (2012). The occupation number $n(k_{1},k_{2})$ of the single-particle states for each of the three many-body ground states is plotted in Fig. 2(b). $n(k_{1},k_{2})$ is nearly uniformly distributed over the single-particle states, consistent with the fact that the ground state is an incompressible liquid. The spectrum evolution under flux insertion along the $k_{2}$ direction is shown in Fig. 2(c); the excitation gap is maintained throughout the flux insertion process. Figure 2: Numerical diagonalization results for 8 particles in a $4\times 6$ Moiré lattice. We choose $\theta=1.38^{\circ}$ and $U=1.38\ \mathrm{meV}$. The bandwidth at this twist angle is $W=0.083\ \mathrm{meV}$. Here $N=N_{1}\times N_{2}/3$ is the number of particles. (a): Energy spectrum with three nearly degenerate ground states in each valley. (b): The occupation number of single-particle states $n(k_{1},k_{2})$ for each of the three many-body ground states. The nearly uniform distribution of $n(k_{1},k_{2})$ suggests the ground state is an incompressible liquid. (c): Under flux insertion along the $k_{2}$ direction, the ground states evolve into each other. (d): Particle entanglement spectrum (PES) for the separation of $N_{A}=4$ particles. The topological nature of the ground states is further confirmed by our calculation of the particle entanglement spectrum (PES) Bernevig and Regnault (2012). 
To compute the PES, we divide the $N$ particles into two collections of $N_{A}$ and $N_{B}=N-N_{A}$ particles and trace out the $N_{B}$ particles to get the reduced density matrix $\rho_{A}$. The PES levels $\xi$ are obtained from the logarithm of the eigenvalues of $\rho_{A}$, and are labeled by the total momentum of the remaining $N_{A}$ particles, as shown in Fig. 2(d). There is a clear entanglement gap with 2730 levels below the gap for $N_{A}=4$, consistent with the counting of quasihole excitations in the $\nu=1/3$ FCI Bernevig and Regnault (2012). To examine finite-size effects, we study the scaling of the many-body gap $\Delta$ with various system sizes. For a genuine FCI, $\Delta$ remains finite in the thermodynamic limit when both $N_{1}$ and $N_{2}$ approach infinity. However, $\Delta$ should vanish if only one of $N_{1}$ or $N_{2}$ approaches infinity, because this limit is a one-dimensional system, which cannot support an FCI Regnault and Bernevig (2011). This is confirmed in Fig. 3(a), which shows that $\Delta$ decreases when $N_{1}$ is fixed at 3 and $N_{2}$ increases from 4 to 8, but $\Delta$ increases when the system size changes from $3\times 8$ to $4\times 6$. Figure 3: (a): The many-body gap $\Delta$ for various system sizes at $\nu=1/3$ filling. The interaction strength is fixed to $U=1.38\ \mathrm{meV}$. The increase of $\Delta$ in the $4\times 6$ system suggests the gap persists in the two-dimensional thermodynamic limit. (b): The phase diagram for the Fermi liquid (FL), FL with valley polarization (VP) and fractional Chern insulator (FCI) at different interaction strengths $U(\epsilon)=\frac{e^{2}}{4\pi\epsilon\epsilon_{0}a_{M}}$ and twist angles $\theta$. The dashed line corresponds to $U(\epsilon)=\Delta_{12}$, above which the interaction starts to mix different bands and the single-band approximation breaks down. 
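The quoted PES counting can be cross-checked against the generalized Pauli exclusion rule for a $\nu=1/3$ state, which admits at most one particle in any 3 consecutive orbitals. The brute-force enumeration below is our own sketch (treating the $4\times 6=24$ momentum orbitals as a periodic one-dimensional chain is the standard counting device, not the authors' code); it reproduces the 2730 admissible configurations:

```python
from itertools import combinations

# Count (1,3)-admissible configurations of N_A = 4 particles in 24 orbitals
# on a periodic chain: every pair of occupied orbitals must be at least 3
# sites apart (circularly), i.e. at most one particle per 3 consecutive
# orbitals.  This should match the 2730 PES levels below the gap.
L, NA = 24, 4

def admissible(occ, L, gap=3):
    occ = sorted(occ)
    gaps = [occ[i + 1] - occ[i] for i in range(len(occ) - 1)]
    gaps.append(occ[0] + L - occ[-1])     # periodic wrap-around gap
    return min(gaps) >= gap

count = sum(admissible(c, L) for c in combinations(range(L), NA))
print(count)   # -> 2730
```

The same number follows from the closed-form count of points on a circle of $L$ sites with circular spacing at least 3: $\frac{L}{L-2N_A}\binom{L-2N_A}{N_A}=\frac{24}{16}\binom{16}{4}=2730$.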
We then map out the phase diagram at $\nu=1/3$ filling as a function of the interaction strength $U$, which can be controlled in experiments by the distance between the electrodes and the Moiré superlattice and by the dielectric constant $\epsilon$. The results are shown in Fig. 3(b). We find a valley-nonpolarized Fermi liquid at small $1/\epsilon$, corresponding to small $U$; a Fermi liquid with valley polarization at intermediate interaction; and the FCI phase with valley polarization at strong interaction. Depending on the twist angle $\theta$, which controls the bandwidth, the valley-nonpolarized Fermi liquid can transition directly to the FCI with valley polarization or pass through an intermediate Fermi liquid with valley polarization. The phase transition between the valley-polarized Fermi liquid and the FCI can be described by a Ginzburg-Landau theory with a Chern-Simons term sup . The direct transition occurs near $\theta=1.38^{\circ}$, where the single-particle Moiré band has the largest gap-to-bandwidth ratio [see Fig. 1(d)]. This is consistent with quantum Hall systems with flat Landau levels, where the interaction simultaneously stabilizes the fractional quantum Hall state with spin polarization. Note that the FCI can be stabilized in a relatively broader region of twist angles here compared to that in magic-angle twisted bilayer graphene Ledwith _et al._ (2020); Repellin and Senthil (2020); Abouelkomsan _et al._ (2020); Liu _et al._ (2021); Wilhelm _et al._ (2020); Sohal _et al._ (2018); Sohal and Fradkin (2020), and the region of angles for the FCI increases with interaction. For interactions above the dashed line in Fig. 3(b), the single-band approximation used in our numerical calculations breaks down, and other nearby bands must be taken into account. _Excitations._ — Here we study the charge-neutral excitations above the FCI ground states. 
As a consequence of the spontaneous valley polarization, we consider the valley-wave excitation $\lvert\Psi_{v}(\bm{q})\rangle=\sum_{\bm{k}}z_{\bm{k}}C_{+}^{\dagger}(\bm{k}+\bm{q})C_{-}(\bm{k})\lvert\Psi_{-}\rangle$, where $\lvert\Psi_{-}\rangle$ is the FCI ground state with the $\tau=-$ valley fully occupied and $z_{\bm{k}}$ is a variational parameter. The presence of the form factor in Eq. (5) breaks the valley pseudospin $\mathrm{SU(2)}$ rotation symmetry down to the valley $U(1)_{v}$ symmetry. As a result, the valley-wave excitation is gapped, as shown in Fig. 4, and can be fitted by $E_{w}(\bm{q})=Jq^{2}+A$. The valley wave disperses weakly in momentum and thus is well localized in real space. The lowest intravalley many-body excitation has lower energy than the valley-wave excitation for the parameters we used, i.e., the energy difference between the lowest fully polarized excited state and the FCI state is $E_{mb}=0.167\ \mathrm{meV}<E_{w}$, see Fig. 4. Nevertheless, the valley-wave excitation remains a stable excitation because its decay into intravalley many-body excitations is forbidden: intravalley many-body excitations have valley quantum number 0, while the valley wave has valley quantum number 2. In quantum Hall ferromagnets, a pair of skyrmions has lower energy than the particle-hole bound state Sondhi _et al._ (1993). The system-size limitation in the exact diagonalization does not allow us to study the valley skyrmion excitation in our numerical calculations. Here we use an effective Hamiltonian density for the valley pseudospin $\mathbf{n}(r)$ Sondhi _et al._ (1993): $\displaystyle H_{n}(r)=\frac{J}{2}(\nabla\mathbf{n})^{2}-\frac{A}{2}n_{z}^{2}+\frac{1}{2}\int dr^{\prime 2}V(r-r^{\prime})\rho_{s}(r)\rho_{s}(r^{\prime}),$ (6) where $J$ and $A$ are given by the valley-wave spectrum. The presence of the valley pseudospin anisotropy can be traced back to the opposite Chern numbers in the opposite valleys. 
One cannot rotate $\mathbf{n}$ from one valley to the opposite valley adiabatically without closing the energy gap, which implies the existence of an anisotropy for $\mathbf{n}$. The last term accounts for the Coulomb interaction $V(r-r^{\prime})$, because a skyrmion is dressed with the charge distribution $\rho_{s}(r)=\epsilon^{ij}\epsilon_{abc}n^{a}\partial_{i}n^{b}\partial_{j}n^{c}/8\pi$, where $\epsilon_{abc}$ ($\epsilon^{ij}$) is the Levi-Civita tensor with $i,j$ being the space indices and $a,b,c$ being the spin indices. The skyrmion topological charge $Q_{s}=\int dr^{2}\rho_{s}(r)$ is quantized to an integer. The easy-axis anisotropy favors skyrmions with a small radius, while the Coulomb repulsion favors skyrmions with a large radius; their competition determines the skyrmion size Chatterjee _et al._ (2020). Figure 4: Dispersion of the valley-wave excitation $E_{w}(\bm{q})$ for 8 particles in the $4\times 6$ lattice. Excitations above the ground states with total momentum $k_{1}=0,k_{2}=0$ ($k_{1}=0,k_{2}=2$) are labeled by the squares and circles, respectively. The slight energy difference for these two ground states is caused by the finite-size effect. The inset compares the energy of the lowest valley-wave excitation $E_{w}$, the lowest intravalley many-body excitation $E_{mb}$ and the ground state energy $E_{g}$. _FCI at $\nu=2/5$._ — In twisted graphene Moiré superlattices, the Halperin $(332)$ state is stabilized at $\nu=2/5$ due to the remaining SU(2) spin rotation symmetry in the valley-polarized state Liu _et al._ (2021). In our TMD Moiré superlattices, the spin rotation symmetry is absent because of the spin-valley locking. The Chern-number-contrasting valley degree of freedom disfavors the $(332)$ state. To demonstrate this explicitly, we calculate the energy spectrum, the spectral flow under flux insertion and the entanglement spectrum at $\nu=2/5$, and the results are displayed in Fig. 5. 
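The integer quantization of the skyrmion charge $Q_s$ defined above can be checked numerically for a model texture. The sketch below (ours, not from the paper) evaluates the charge of a standard Belavin-Polyakov profile, which is only an assumed stand-in for the actual valley-pseudospin texture; note that $\epsilon^{ij}\epsilon_{abc}n^{a}\partial_{i}n^{b}\partial_{j}n^{c}=2\,\mathbf{n}\cdot(\partial_{x}\mathbf{n}\times\partial_{y}\mathbf{n})$, so the $1/8\pi$ normalization reduces to the familiar $1/4\pi$ form.

```python
import numpy as np

# Numerical check that Q_s = (1/4 pi) int n . (d_x n x d_y n) d^2 r is an
# integer for a Belavin-Polyakov skyrmion of size lam (assumed profile).
lam, Lbox, Ng = 1.0, 40.0, 801
x = np.linspace(-Lbox, Lbox, Ng)
X, Y = np.meshgrid(x, x, indexing="ij")
r2 = X**2 + Y**2
# Unit-vector texture: n_z = +1 at the core, n_z -> -1 far away.
n = np.stack([2 * lam * X, 2 * lam * Y, lam**2 - r2]) / (lam**2 + r2)

dx = x[1] - x[0]
dnx = np.gradient(n, dx, axis=1)     # d n / d x
dny = np.gradient(n, dx, axis=2)     # d n / d y
dens = np.einsum("aij,aij->ij", n, np.cross(dnx, dny, axis=0))
Qs = dens.sum() * dx * dx / (4 * np.pi)
print(round(abs(Qs), 3))             # |Q_s| close to 1, up to grid error
```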
The 5-fold degenerate ground states are valley polarized, and are consistent with the $\nu=2/5$ FCI state. In the fractional quantum Hall effect, the $\nu=2/5$ state belongs to the second hierarchical Jain states, and similarly one can assign the $\nu=2/5$ FCI as a second hierarchical FCI. Our results highlight the importance of symmetry in dictating the ground state and contrast the difference between the graphene and TMD Moiré superlattices. Figure 5: FCI at $\nu=2/5$. (a): The energy spectrum of 8 particles in a $4\times 5$ system, and (b): 6 particles in a $3\times 5$ system. (c): The flux insertion for the system in (a), where the five ground states are marked in red (some of them are on top of each other). A finite gap remains during flux insertion. (d): The particle entanglement spectrum for (a) with $N_{A}=3$. There are $51\times 20=1020$ states below the dashed line, consistent with the quasihole counting. _Discussions._ — We show that TMD Moiré superlattices can host fractional topological states via spontaneous breaking of the time-reversal symmetry, using realistic parameters of TMD Moiré superlattices. Compared with graphene, the spin-valley locking in TMD materials breaks the SU(2) spin rotation symmetry and eliminates the spin-wave Goldstone modes, which could help stabilize the FCI states. The valley-contrasting Chern number in TMD Moiré superlattices also dictates the symmetry-breaking states, hence the nature of the fractionalized topological states, and also the low-energy excitations in the FCI. The gapped nature of these states can be detected by transport or optical measurements, and their topological nature can be accessed by Hall conductivity measurements. Due to the strong analogy between FCIs and chiral spin liquids, it is plausible that Moiré superlattices may also help realize or stabilize exotic spin liquid phases Kumar _et al._ (2014, 2015); Zhu _et al._ (2016) by utilizing the valley/layer pseudospin or real spin degrees of freedom. 
_Acknowledgements._ — This work done at LANL was carried out under the auspices of the U.S. DOE NNSA under contract No. 89233218CNA000001 through the LDRD Program. S. Z. L. was also supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division, Condensed Matter Theory Program. H.L. and K.S. acknowledge support through NSF Grant No. NSF-EFMA-1741618. ## References * Lopes dos Santos _et al._ (2007) J. M. B. Lopes dos Santos, N. M. R. Peres, and A. H. Castro Neto, “Graphene bilayer with a twist: Electronic structure,” Phys. Rev. Lett. 99, 256802 (2007). * Bistritzer and MacDonald (2011) Rafi Bistritzer and Allan H. MacDonald, “Moiré bands in twisted double-layer graphene,” PNAS 108, 12233–12237 (2011). * Cao _et al._ (2018a) Yuan Cao, Valla Fatemi, Ahmet Demir, Shiang Fang, Spencer L. Tomarken, Jason Y. Luo, Javier D. Sanchez-Yamagishi, Kenji Watanabe, Takashi Taniguchi, Efthimios Kaxiras, Ray C. Ashoori, and Pablo Jarillo-Herrero, “Correlated insulator behaviour at half-filling in magic-angle graphene superlattices,” Nature 556, 80–84 (2018a). * Cao _et al._ (2018b) Yuan Cao, Valla Fatemi, Shiang Fang, Kenji Watanabe, Takashi Taniguchi, Efthimios Kaxiras, and Pablo Jarillo-Herrero, “Unconventional superconductivity in magic-angle graphene superlattices,” Nature 556, 43–50 (2018b). * Lu _et al._ (2019) Xiaobo Lu, Petr Stepanov, Wei Yang, Ming Xie, Mohammed Ali Aamir, Ipsita Das, Carles Urgell, Kenji Watanabe, Takashi Taniguchi, Guangyu Zhang, Adrian Bachtold, Allan H. MacDonald, and Dmitri K. Efetov, “Superconductors, orbital magnets and correlated states in magic-angle bilayer graphene,” Nature 574, 653–657 (2019). * Yankowitz _et al._ (2019) Matthew Yankowitz, Shaowen Chen, Hryhoriy Polshyn, Yuxuan Zhang, K. Watanabe, T. Taniguchi, David Graf, Andrea F. Young, and Cory R. Dean, “Tuning superconductivity in twisted bilayer graphene,” Science 363, 1059–1064 (2019). 
* Kerelsky _et al._ (2019) Alexander Kerelsky, Leo J McGilly, Dante M Kennes, Lede Xian, Matthew Yankowitz, Shaowen Chen, K Watanabe, T Taniguchi, James Hone, Cory Dean, _et al._ , “Maximized electron interactions at the magic angle in twisted bilayer graphene,” Nature 572, 95–100 (2019). * Cao _et al._ (2019) Yuan Cao, Debanjan Chowdhury, Daniel Rodan-Legrain, Oriol Rubies-Bigordà, Kenji Watanabe, Takashi Taniguchi, T Senthil, and Pablo Jarillo-Herrero, “Strange metal in magic-angle graphene with near planckian dissipation,” arXiv preprint arXiv:1901.03710 (2019). * Polshyn _et al._ (2019) Hryhoriy Polshyn, Matthew Yankowitz, Shaowen Chen, Yuxuan Zhang, K Watanabe, T Taniguchi, Cory R Dean, and Andrea F Young, “Large linear-in-temperature resistivity in twisted bilayer graphene,” Nature Physics 15, 1011–1016 (2019). * Xie _et al._ (2019) Yonglong Xie, Biao Lian, Berthold Jäck, Xiaomeng Liu, Cheng-Li Chiu, Kenji Watanabe, Takashi Taniguchi, B Andrei Bernevig, and Ali Yazdani, “Spectroscopic signatures of many-body correlations in magic-angle twisted bilayer graphene,” Nature 572, 101–105 (2019). * Jiang _et al._ (2019) Yuhang Jiang, Xinyuan Lai, Kenji Watanabe, Takashi Taniguchi, Kristjan Haule, Jinhai Mao, and Eva Y Andrei, “Charge order and broken rotational symmetry in magic-angle twisted bilayer graphene,” Nature 573, 91–95 (2019). * Choi _et al._ (2019) Youngjoon Choi, Jeannette Kemmer, Yang Peng, Alex Thomson, Harpreet Arora, Robert Polski, Yiran Zhang, Hechen Ren, Jason Alicea, Gil Refael, _et al._ , “Electronic correlations in twisted bilayer graphene near the magic angle,” Nature Physics 15, 1174–1180 (2019). * Zondiner _et al._ (2020) U. Zondiner, A. Rozen, D. Rodan-Legrain, Y. Cao, R. Queiroz, T. Taniguchi, K. Watanabe, Y. Oreg, F. von Oppen, Ady Stern, E. Berg, P. Jarillo-Herrero, and S. Ilani, “Cascade of phase transitions and Dirac revivals in magic-angle graphene,” Nature 582, 203–208 (2020). 
* Wong _et al._ (2020) Dillon Wong, Kevin P. Nuckolls, Myungchul Oh, Biao Lian, Yonglong Xie, Sangjun Jeon, Kenji Watanabe, Takashi Taniguchi, B. Andrei Bernevig, and Ali Yazdani, “Cascade of electronic transitions in magic-angle twisted bilayer graphene,” Nature 582, 198–202 (2020). * Nuckolls _et al._ (2020) Kevin P. Nuckolls, Myungchul Oh, Dillon Wong, Biao Lian, Kenji Watanabe, Takashi Taniguchi, B. Andrei Bernevig, and Ali Yazdani, “Strongly correlated Chern insulators in magic-angle twisted bilayer graphene,” Nature 588, 610–615 (2020). * He _et al._ (2020) Minhao He, Yuhao Li, Jiaqi Cai, Yang Liu, K. Watanabe, T. Taniguchi, Xiaodong Xu, and Matthew Yankowitz, “Symmetry breaking in twisted double bilayer graphene,” Nature Physics, 1–5 (2020). * Liu _et al._ (2020) Xiaomeng Liu, Zeyu Hao, Eslam Khalaf, Jong Yeon Lee, Yuval Ronen, Hyobin Yoo, Danial Haei Najafabadi, Kenji Watanabe, Takashi Taniguchi, Ashvin Vishwanath, and Philip Kim, “Tunable spin-polarized correlated states in twisted double bilayer graphene,” Nature 583, 221–225 (2020). * Regan _et al._ (2020) Emma C. Regan, Danqing Wang, Chenhao Jin, M. Iqbal Bakti Utama, Beini Gao, Xin Wei, Sihan Zhao, Wenyu Zhao, Zuocheng Zhang, Kentaro Yumigeta, Mark Blei, Johan D. Carlström, Kenji Watanabe, Takashi Taniguchi, Sefaattin Tongay, Michael Crommie, Alex Zettl, and Feng Wang, “Mott and generalized Wigner crystal states in $\mathrm{WSe_2/WS_2}$ moiré superlattices,” Nature 579, 359–363 (2020). * Wang _et al._ (2020) Lei Wang, En-Min Shih, Augusto Ghiotto, Lede Xian, Daniel A. Rhodes, Cheng Tan, Martin Claassen, Dante M. Kennes, Yusong Bai, Bumho Kim, Kenji Watanabe, Takashi Taniguchi, Xiaoyang Zhu, James Hone, Angel Rubio, Abhay N. Pasupathy, and Cory R. 
Dean, “Correlated electronic phases in twisted bilayer transition metal dichalcogenides,” Nature Materials 19, 861–866 (2020), number: 8 Publisher: Nature Publishing Group. * Xie and MacDonald (2020) Ming Xie and A. H. MacDonald, “Nature of the correlated insulator states in twisted bilayer graphene,” Phys. Rev. Lett. 124, 097601 (2020). * Wu and Das Sarma (2020) Fengcheng Wu and Sankar Das Sarma, “Collective excitations of quantum anomalous hall ferromagnets in twisted bilayer graphene,” Phys. Rev. Lett. 124, 046403 (2020). * Su and Lin (2020) Ying Su and Shi-Zeng Lin, “Current-induced reversal of anomalous hall conductance in twisted bilayer graphene,” Phys. Rev. Lett. 125, 226401 (2020). * Padhi _et al._ (2018) Bikash Padhi, Chandan Setty, and Philip W. Phillips, “Doped twisted bilayer graphene near magic angles: Proximity to wigner crystallization, not mott insulation,” Nano Letters 18, 6175–6180 (2018), pMID: 30185049, https://doi.org/10.1021/acs.nanolett.8b02033 . * Padhi _et al._ (2020) Bikash Padhi, R. Chitra, and Philip W. Phillips, “Generalized wigner crystallization in moiré materials,” (2020), arXiv:2009.13536 [cond-mat.str-el] . * Padhi and Phillips (2019) Bikash Padhi and Philip W. Phillips, “Pressure-induced metal-insulator transition in twisted bilayer graphene,” Phys. Rev. B 99, 205141 (2019). * Stefanidis and Sodemann (2020) Nikolaos Stefanidis and Inti Sodemann, “Excitonic laughlin states in ideal topological insulator flat bands and their possible presence in moiré superlattice materials,” Phys. Rev. B 102, 035158 (2020). * Bultinck _et al._ (2020a) Nick Bultinck, Shubhayu Chatterjee, and Michael P. Zaletel, “Mechanism for anomalous hall ferromagnetism in twisted bilayer graphene,” Phys. Rev. Lett. 124, 166601 (2020a). * Zhang _et al._ (2019) Ya-Hui Zhang, Dan Mao, Yuan Cao, Pablo Jarillo-Herrero, and T. Senthil, “Nearly flat chern bands in moiré superlattices,” Phys. Rev. B 99, 075127 (2019). 
* Repellin _et al._ (2020) Cécile Repellin, Zhihuan Dong, Ya-Hui Zhang, and T. Senthil, “Ferromagnetism in narrow bands of moiré superlattices,” Phys. Rev. Lett. 124, 187601 (2020). * Sharpe _et al._ (2019) Aaron L. Sharpe, Eli J. Fox, Arthur W. Barnard, Joe Finney, Kenji Watanabe, Takashi Taniguchi, M. A. Kastner, and David Goldhaber-Gordon, “Emergent ferromagnetism near three-quarters filling in twisted bilayer graphene,” Science 365, 605–608 (2019). * Serlin _et al._ (2019) M. Serlin, C. L. Tschirhart, H. Polshyn, Y. Zhang, J. Zhu, K. Watanabe, T. Taniguchi, L. Balents, and A. F. Young, “Intrinsic quantized anomalous hall effect in a moiré heterostructure,” Science (2019), 10.1126/science.aay5533. * Chen _et al._ (2019) Guorui Chen, Aaron L Sharpe, Eli J Fox, Ya-Hui Zhang, Shaoxin Wang, Lili Jiang, Bosai Lyu, Hongyuan Li, Kenji Watanabe, Takashi Taniguchi, _et al._ , “Tunable correlated chern insulator and ferromagnetism in trilayer graphene/boron nitride moiré superlattice,” arXiv preprint arXiv:1905.06535 (2019). * Ledwith _et al._ (2020) Patrick J. Ledwith, Grigory Tarnopolsky, Eslam Khalaf, and Ashvin Vishwanath, “Fractional chern insulator states in twisted bilayer graphene: An analytical approach,” Phys. Rev. Research 2, 023237 (2020). * Repellin and Senthil (2020) Cécile Repellin and T. Senthil, “Chern bands of twisted bilayer graphene: Fractional chern insulators and spin phase transition,” Phys. Rev. Research 2, 023238 (2020). * Abouelkomsan _et al._ (2020) Ahmed Abouelkomsan, Zhao Liu, and Emil J. Bergholtz, “Particle-hole duality, emergent fermi liquids, and fractional chern insulators in moiré flatbands,” Phys. Rev. Lett. 124, 106803 (2020). * Liu _et al._ (2021) Zhao Liu, Ahmed Abouelkomsan, and Emil J. Bergholtz, “Gate-tunable fractional chern insulators in twisted double bilayer graphene,” Phys. Rev. Lett. 126, 026801 (2021). * Wilhelm _et al._ (2020) Patrick Wilhelm, Thomas C. Lang, and Andreas M.
Läuchli, “Interplay of Fractional Chern Insulator and Charge-Density-Wave Phases in Twisted Bilayer Graphene,” arXiv:2012.09829 [cond-mat] (2020), arXiv: 2012.09829. * Sohal _et al._ (2018) Ramanjit Sohal, Luiz H. Santos, and Eduardo Fradkin, “Chern-simons composite fermion theory of fractional chern insulators,” Phys. Rev. B 97, 125131 (2018). * Sohal and Fradkin (2020) Ramanjit Sohal and Eduardo Fradkin, “Intertwined order in fractional chern insulators from finite-momentum pairing of composite fermions,” Phys. Rev. B 101, 245154 (2020). * Xu _et al._ (2020) Yang Xu, Song Liu, Daniel A. Rhodes, Kenji Watanabe, Takashi Taniguchi, James Hone, Veit Elser, Kin Fai Mak, and Jie Shan, “Correlated insulating states at fractional fillings of moiré superlattices,” Nature 587, 214–218 (2020), number: 7833 Publisher: Nature Publishing Group. * Jin _et al._ (2020) Chenhao Jin, Zui Tao, Tingxin Li, Yang Xu, Yanhao Tang, Jiacheng Zhu, Song Liu, Kenji Watanabe, Takashi Taniguchi, James C. Hone, Liang Fu, Jie Shan, and Kin Fai Mak, “Stripe phases in WSe2/WS2 moiré superlattices,” arXiv:2007.12068 [cond-mat] (2020), arXiv: 2007.12068. * Zhou _et al._ (2020) You Zhou, Jiho Sung, Elise Brutschea, Ilya Esterlis, Yao Wang, Giovanni Scuri, Ryan J. Gelly, Hoseok Heo, Takashi Taniguchi, Kenji Watanabe, Gergely Zaránd, Mikhail D. Lukin, Philip Kim, Eugene Demler, and Hongkun Park, “Signatures of bilayer Wigner crystals in a transition metal dichalcogenide heterostructure,” arXiv:2010.03037 [cond-mat] (2020), arXiv: 2010.03037. * Huang _et al._ (2020) Xiong Huang, Tianmeng Wang, Shengnan Miao, Chong Wang, Zhipeng Li, Zhen Lian, Takashi Taniguchi, Kenji Watanabe, Satoshi Okamoto, Di Xiao, Su-Fei Shi, and Yong-Tao Cui, “Correlated Insulating States at Fractional Fillings of the WS2/WSe2 Moiré Lattice,” arXiv:2007.11155 [cond-mat] (2020), arXiv: 2007.11155. * Tang _et al._ (2011) Evelyn Tang, Jia-Wei Mei, and Xiao-Gang Wen, “High-temperature fractional quantum hall states,” Phys. Rev.
Lett. 106, 236802 (2011). * Sun _et al._ (2011) Kai Sun, Zhengcheng Gu, Hosho Katsura, and S. Das Sarma, “Nearly flatbands with nontrivial topology,” Phys. Rev. Lett. 106, 236803 (2011). * Neupert _et al._ (2011) Titus Neupert, Luiz Santos, Claudio Chamon, and Christopher Mudry, “Fractional quantum hall states at zero magnetic field,” Phys. Rev. Lett. 106, 236804 (2011). * Regnault and Bernevig (2011) N. Regnault and B. Andrei Bernevig, “Fractional chern insulator,” Phys. Rev. X 1, 021014 (2011). * Sheng _et al._ (2011) D. N. Sheng, Zheng-Cheng Gu, Kai Sun, and L. Sheng, “Fractional quantum hall effect in the absence of landau levels,” Nature Communications 2, 389 (2011). * Parameswaran _et al._ (2013) Siddharth A. Parameswaran, Rahul Roy, and Shivaji L. Sondhi, “Fractional quantum Hall physics in topological flat bands,” Comptes Rendus Physique Topological insulators / Isolants topologiques, 14, 816–839 (2013). * Bergholtz and Liu (2013) Emil J. Bergholtz and Zhao Liu, “Topological flat band models and fractional chern insulators,” International Journal of Modern Physics B 27, 1330017 (2013), publisher: World Scientific Publishing Co. * Wu _et al._ (2012) Yang-Le Wu, B. Andrei Bernevig, and N. Regnault, “Zoology of fractional chern insulators,” Phys. Rev. B 85, 075116 (2012). * Wu _et al._ (2018) Fengcheng Wu, Timothy Lovorn, Emanuel Tutuc, and A. H. MacDonald, “Hubbard model physics in transition metal dichalcogenide moiré bands,” Phys. Rev. Lett. 121, 026402 (2018). * Wu _et al._ (2019) Fengcheng Wu, Timothy Lovorn, Emanuel Tutuc, Ivar Martin, and A. H. MacDonald, “Topological insulators in twisted transition metal dichalcogenide homobilayers,” Phys. Rev. Lett. 122, 086402 (2019). * Levin and Stern (2009) Michael Levin and Ady Stern, “Fractional topological insulators,” Phys. Rev. Lett. 103, 196803 (2009). 
* Xiao _et al._ (2012) Di Xiao, Gui-Bin Liu, Wanxiang Feng, Xiaodong Xu, and Wang Yao, “Coupled spin and valley physics in monolayers of ${\mathrm{mos}}_{2}$ and other group-vi dichalcogenides,” Phys. Rev. Lett. 108, 196802 (2012). * Lee _et al._ (2019) Jong Yeon Lee, Eslam Khalaf, Shang Liu, Xiaomeng Liu, Zeyu Hao, Philip Kim, and Ashvin Vishwanath, “Theory of correlated insulating behaviour and spin-triplet superconductivity in twisted double bilayer graphene,” Nature Communications 10, 5333 (2019). * (57) See Supplemental Materials for (1) variational calculations of the energy for the valley polarized and intervalley coherent state, (2) Hartree-Fock calculations, (3) discussion on the possibility of the Halperin state and (4) effective theory for the transition between the valley polarized Fermi liquid and fractional Chern insulator. * Stern (2016) Ady Stern, “Fractional Topological Insulators: A Pedagogical Review,” Annual Review of Condensed Matter Physics 7, 349–368 (2016), publisher: Annual Reviews. * Note (1) One may think of another possibility, analogous to the Halperin $(m_{1},\ m_{2},\ n_{1})$ states. However, this state is not favored because of the opposite Chern number in the opposite valley. * Bernevig and Regnault (2012) B. Andrei Bernevig and N. Regnault, “Emergent many-body translational symmetries of abelian and non-abelian fractionally filled topological insulators,” Phys. Rev. B 85, 075128 (2012). * Sondhi _et al._ (1993) S. L. Sondhi, A. Karlhede, S. A. Kivelson, and E. H. Rezayi, “Skyrmions and the crossover from the integer to fractional quantum hall effect at small zeeman energies,” Phys. Rev. B 47, 16419–16426 (1993). * Chatterjee _et al._ (2020) Shubhayu Chatterjee, Matteo Ippoliti, and Michael P. Zaletel, “Skyrmion Superconductivity: DMRG evidence for a topological route to superconductivity,” arXiv:2010.01144 [cond-mat] (2020), arXiv: 2010.01144.
* Kumar _et al._ (2014) Krishna Kumar, Kai Sun, and Eduardo Fradkin, “Chern-simons theory of magnetization plateaus of the spin-$\frac{1}{2}$ quantum xxz heisenberg model on the kagome lattice,” Phys. Rev. B 90, 174409 (2014). * Kumar _et al._ (2015) Krishna Kumar, Kai Sun, and Eduardo Fradkin, “Chiral spin liquids on the kagome lattice,” Phys. Rev. B 92, 094433 (2015). * Zhu _et al._ (2016) W. Zhu, Shou-Shu Gong, Tian-Sheng Zeng, Liang Fu, and D. N. Sheng, “Interaction-driven spontaneous quantum hall effect on a kagome lattice,” Phys. Rev. Lett. 117, 096402 (2016). * Bultinck _et al._ (2020b) Nick Bultinck, Eslam Khalaf, Shang Liu, Shubhayu Chatterjee, Ashvin Vishwanath, and Michael P. Zaletel, “Ground state and hidden symmetry of magic-angle graphene at even integer filling,” Phys. Rev. X 10, 031034 (2020b). * Fradkin (2013) Eduardo Fradkin, _Field Theories of Condensed Matter Physics_ , 2nd ed. (Cambridge University Press, Cambridge, 2013). * Chen and Yang (2012) Hua Chen and Kun Yang, “Interaction-driven quantum phase transitions in fractional topological insulators,” Phys. Rev. B 85, 195113 (2012). Supplemental Material: Spontaneous fractional Chern insulators in transition-metal-dichalcogenides Moiré superlattices ## .1 I. Energy for valley polarized (VP) and intervalley coherent (IVC) state Here we compare the energy of the valley polarized and intervalley coherent states. We consider half filling of the topmost band, including the valley degree of freedom. The wave function for the VP state can be written as $\lvert\Psi_{\text{VP}}\rangle=\prod_{k}C_{+}^{\dagger}(k)\lvert 0\rangle.$ Here we choose the $\tau=+$ valley to be fully occupied. Its energy $\langle\Psi_{\text{VP}}\rvert H\lvert\Psi_{\text{VP}}\rangle$ can be decomposed into a single particle contribution $E_{0}=\sum_{k}\epsilon_{+}(k)$, a Hartree contribution $E_{\mathrm{Ha}}=V(0)[\sum_{k}\lambda_{+,0}(k)]^{2}$, and a Fock contribution $E_{\mathrm{Fo}}=-\sum_{k,q}V(q)|\lambda_{+,q}(k)|^{2}$.
Here the summation over $k$ runs over the whole Moiré Brillouin zone. We note that the system Hamiltonian is invariant under the following gauge transformation $C_{\tau}(k)\rightarrow\exp[i\theta_{\tau}(k)]C_{\tau}(k),$ $\lambda_{\tau,q}(k)\rightarrow\exp[i\theta_{\tau}(k)-\theta_{\tau}(k+q)]\lambda_{\tau,q}(k).$ We choose a gauge which fixes the valley pseudospin associated with $\lvert\Psi_{\text{IVC}}\rangle$ in the $x$ direction, and the wave function of the intervalley coherent state can be written as $\lvert\Psi_{\text{IVC}}\rangle=\frac{1}{\sqrt{2}}\prod_{k}[C_{+}^{\dagger}(k)+C_{-}^{\dagger}(k)]\lvert 0\rangle.$ The energy for the IVC is the sum of a single particle contribution $E_{0}^{\prime}=\sum_{k}[\epsilon_{+}(k)+\epsilon_{-}(k)]/2$, a Hartree contribution $E_{\mathrm{Ha}}^{\prime}=\frac{V(0)}{4}[\sum_{k,\tau=\pm}\lambda_{\tau,0}(k)]^{2}$, and a Fock contribution $E_{\mathrm{Fo}}^{\prime}=-\frac{1}{4}\sum_{k,q}V(q)\left[|\lambda_{+,q}(k)|^{2}+|\lambda_{-,q}(k)|^{2}+\lambda_{+,q}(k)\lambda_{-,-q}(k+q)+\lambda_{-,q}(k)\lambda_{+,-q}(k+q)\right].$ It is easy to check that the single particle and Hartree parts of the energy for the VP and IVC states are the same, so we compare the Fock parts. Noticing that $\lambda_{\tau,-q}(k+q)=\lambda_{\tau,q}^{*}(k)$, we have $\lambda_{+,q}(k)\lambda_{-,-q}(k+q)+\lambda_{-,q}(k)\lambda_{+,-q}(k+q)\leq 2|\lambda_{+,q}(k)\lambda_{-,-q}(k+q)|,$ where the bound is saturated when $\lambda_{+,q}(k)\lambda_{-,-q}(k+q)$ is positive real. The energy difference between the IVC and VP is $E_{\mathrm{Fo}}^{\prime}-E_{\mathrm{Fo}}\geq\frac{1}{4}\sum_{k,q}V(q)\left[|\lambda_{+,q}(k)|-|\lambda_{-,q}(k)|\right]^{2}.$ By considering the time reversal operation and the $C_{2}$ rotation combined with layer flipping symmetry, we can show $|\lambda_{+,q}(k)|=|\lambda_{-,q}(k)|$.
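To spell out the intermediate step (our rewriting, using $\lambda_{\tau,-q}(k+q)=\lambda_{\tau,q}^{*}(k)$ and the symmetry $|\lambda_{+,q}(k)|=|\lambda_{-,q}(k)|$ established above): the cross terms combine into a real part, $\lambda_{+,q}(k)\lambda_{-,-q}(k+q)+\lambda_{-,q}(k)\lambda_{+,-q}(k+q)=2\,\mathrm{Re}\left[\lambda_{+,q}(k)\lambda_{-,q}^{*}(k)\right]\leq 2|\lambda_{+,q}(k)||\lambda_{-,q}(k)|=|\lambda_{+,q}(k)|^{2}+|\lambda_{-,q}(k)|^{2},$ so that $E_{\mathrm{Fo}}^{\prime}\geq-\sum_{k,q}V(q)|\lambda_{+,q}(k)|^{2}=E_{\mathrm{Fo}},$ with equality only if the bound is saturated for all $k$ and $q$.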
After we choose the form of $\lvert\Psi_{\text{IVC}}\rangle$ by fixing a gauge, it is not guaranteed that $\lambda_{+,q}(k)\lambda_{-,-q}(k+q)$ is positive real for _all_ $k$ and $q$. Therefore the IVC always has a higher energy than the VP state. This conclusion is further supported by the more detailed Hartree-Fock calculations below. Figure S1: $P_{v}~{}(=\sum_{\bm{k}}[\Delta_{++}(\bm{k})-\Delta_{--}(\bm{k})])$ dependence on interaction ($U$) for integer filling ($\nu=1$) in panel (a) and fractional filling ($\nu=1/3$) in panel (b). We observe an unpolarized metal for smaller $U$, a partially polarized metal for intermediate $U$, and a valley polarized state for large $U$. ## .2 II. Hartree-Fock calculations We treat the interaction part of the Hamiltonian using Hartree-Fock mean-field theory Bultinck _et al._ (2020b), in which the Hamiltonian can be written as $\begin{split}\mathcal{H}_{MF}=\sum_{k,\tau,\tau^{\prime}}C_{\tau}^{\dagger}(\bm{k})[(h_{0}(\bm{k})-\mu)\delta_{\tau,\tau^{\prime}}+h_{HF}^{\tau,\tau^{\prime}}(\Delta_{k},\bm{k})]C_{\tau^{\prime}}(\bm{k})-\frac{1}{2}\text{tr}~{}h_{MF}(\Delta_{k},\bm{k})\Delta_{k}^{T}\end{split}$ (S1) Here, $h_{0}(\bm{k})=h_{BM}(\bm{k})-\frac{1}{2}h_{HF}(\Delta_{0},\bm{k})$, where $\Delta_{0}$ is the reference density matrix, chosen such that $\mathcal{H}_{MF}=h_{BM}$ for the symmetry unbroken state when $\Delta_{k}=\Delta_{0}$ Bultinck _et al._ (2020b). We therefore choose $\Delta_{0}=\begin{pmatrix}\nu/2&0\\\ 0&\nu/2\end{pmatrix}~{}\forall~{}\bm{k}$.
Also, $h_{HF}$ is given by $\begin{split}h_{HF}^{\tau,\tau^{\prime}}(\Delta_{k},k)&=\frac{U}{2N_{\text{cell}}}\sum_{\bm{G}}v_{\bm{G}}\lambda_{\tau,\bm{G}}(\bm{k})\delta_{\tau,\tau^{\prime}}\sum_{\tau^{\prime\prime},\bm{k}^{\prime}}\text{tr}[\lambda_{\tau^{\prime\prime},\bm{G}}^{\dagger}(\bm{k}^{\prime})\Delta_{\tau,\tau^{\prime\prime}}^{T}(\bm{k})]\\\ &-\frac{U}{2N_{\text{cell}}}\sum_{\bm{G},\bm{k}^{\prime}}v_{\bm{G}+\bm{k}^{\prime}}\lambda_{\tau,\bm{G}+\bm{k}^{\prime}}(\bm{k})\lambda_{\tau^{\prime},\bm{G}+\bm{k}^{\prime}}^{\dagger}(\bm{k})\Delta_{\tau,\tau^{\prime}}^{T}(\bm{k}+\bm{k}^{\prime})\end{split}$ (S2) In the above equation, the first and second terms are the Hartree and Fock contributions, respectively. Here, $N_{\text{cell}}$ is the total area of the MBZ, and we have used $\bm{q}=\bm{G}+\bm{k}^{\prime}$, where $\bm{k},\bm{k}^{\prime}$ are momentum vectors in the first Brillouin zone (BZ) and $\bm{G}$ is a reciprocal vector connecting different BZs. The matrix $\lambda_{\tau,\bm{q}}(\bm{k})=\lambda_{\tau,\bm{G}+\bm{k}^{\prime}}(\bm{k})$ contains the single particle form factors, given by $\lambda_{\tau,\bm{G}+\bm{k}^{\prime}}(\bm{k})=\langle u_{\tau,\bm{k}}|u_{\tau,\bm{k}+\bm{G}+\bm{k}^{\prime}}\rangle$. We also have $v_{\bm{q}}=\frac{4\pi\tanh(qd)}{q\sqrt{3}a_{M}}$ for the dual-gate screened Coulomb interaction. For writing the mean-field equation, we use the following conditions: a) $\lambda_{+,\bm{q}}(\bm{k})=\lambda_{-,-\bm{q}}^{*}(-\bm{k})$, and b) the $\bm{G}_{j}^{th}$ component of the momentum $\bm{q}=\bm{G}_{l}+\bm{k}$ in the $l^{th}$ Moiré lattice is generated from the central BZ as $|u_{\tau,\bm{k}+\bm{G}_{l}}(\bm{G}_{j})\rangle=|u_{\tau,\bm{k}}(\bm{G}_{j}+\bm{G}_{l})\rangle$, so as to have a consistent gauge. One can obtain the new quasiparticles by solving the above equation, $V^{\dagger}H_{MF}VV^{\dagger}|\psi\rangle=E_{n}V^{\dagger}|\psi\rangle$.
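As a quick numerical sanity check of the dual-gate screened interaction above, the sketch below (the function name and the explicit small-$q$ branch are ours) evaluates $v_{\bm{q}}=\frac{4\pi\tanh(qd)}{q\sqrt{3}a_{M}}$, including its finite $q\rightarrow 0$ limit $4\pi d/(\sqrt{3}a_{M})$:

```python
import numpy as np

def v_screened(q, d, a_M):
    """Dual-gate screened Coulomb interaction v_q = 4*pi*tanh(q*d) / (q*sqrt(3)*a_M).

    For q -> 0, tanh(q*d)/q -> d, so v_q approaches the finite value
    4*pi*d / (sqrt(3)*a_M) instead of diverging like the bare 1/q Coulomb tail.
    """
    q = np.atleast_1d(np.asarray(q, dtype=float))
    out = np.empty_like(q)
    small = q < 1e-12
    out[small] = 4.0 * np.pi * d / (np.sqrt(3.0) * a_M)
    out[~small] = 4.0 * np.pi * np.tanh(q[~small] * d) / (q[~small] * np.sqrt(3.0) * a_M)
    return out

# The interaction is finite at q = 0 and decays like ~1/q at large q.
v = v_screened([0.0, 0.1, 1.0, 10.0], d=1.0, a_M=1.0)
```

The $\tanh(qd)$ factor cuts off the long-range tail on the scale of the gate distance $d$, which is what makes the $q=0$ (Hartree) term well defined.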
One can evaluate the Hamiltonian as $\langle\psi|VV^{\dagger}H_{\text{MF}}VV^{\dagger}|\psi\rangle=\langle\phi|D|\phi\rangle$ where $\begin{split}|\psi(\bm{k})\rangle=\begin{pmatrix}C_{+}(\bm{k})\\\ C_{-}(\bm{k})\\\ \end{pmatrix}=V(\bm{k})|\phi(\bm{k})\rangle\end{split}=\begin{pmatrix}u_{1}(\bm{k})&u_{2}(\bm{k})\\\ v_{1}(\bm{k})&v_{2}(\bm{k})\\\ \end{pmatrix}\begin{pmatrix}\gamma_{1}(\bm{k})\\\ \gamma_{2}(\bm{k})\\\ \end{pmatrix}$ (S3) The gap $\Delta_{\tau,\tau^{\prime}}(\bm{k})=\langle C_{\tau}^{\dagger}(\bm{k})C_{\tau^{\prime}}(\bm{k})\rangle$ has to be written in terms of these new quasiparticles. We now write the gap equation in the new basis, $\Delta_{\tau,\tau^{\prime}}(\bm{k})=\langle C_{\tau}^{\dagger}(\bm{k})C_{\tau^{\prime}}(\bm{k})\rangle=\begin{pmatrix}|u_{1}(\bm{k})|^{2}\langle n_{\gamma_{1}}(\bm{k})\rangle+|u_{2}(\bm{k})|^{2}\langle n_{\gamma_{2}}(\bm{k})\rangle&u_{1}^{*}(\bm{k})v_{1}(\bm{k})\langle n_{\gamma_{1}}(\bm{k})\rangle+u_{2}^{*}(\bm{k})v_{2}(\bm{k})\langle n_{\gamma_{2}}(\bm{k})\rangle\\\ u_{1}(\bm{k})v_{1}^{*}(\bm{k})\langle n_{\gamma_{1}}(\bm{k})\rangle+u_{2}(\bm{k})v_{2}^{*}(\bm{k})\langle n_{\gamma_{2}}(\bm{k})\rangle&|v_{1}(\bm{k})|^{2}\langle n_{\gamma_{1}}(\bm{k})\rangle+|v_{2}(\bm{k})|^{2}\langle n_{\gamma_{2}}(\bm{k})\rangle\end{pmatrix}.$ (S4) Here $n_{\gamma_{m}}(\bm{k})=\gamma_{m}^{\dagger}(\bm{k})\gamma_{m}(\bm{k})$. Also, the filling in the new basis is given by $\begin{split}\bar{n}&=\sum_{\tau,\bm{k}}\langle C_{\tau}^{\dagger}(\bm{k})C_{\tau}(\bm{k})\rangle=\sum_{\bm{k}}\text{tr}[\Delta_{\tau,\tau^{\prime}}]=\sum_{\bm{k}}\langle n_{\gamma_{1}}(\bm{k})\rangle+\langle n_{\gamma_{2}}(\bm{k})\rangle=\nu\end{split}$ (S5) Eqs. S4 and S5 are then solved self-consistently for a fixed filling, until $\mu$ and the mean-field order parameter for all $\bm{k}$ converge.
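The self-consistent solution of Eqs. S4 and S5 follows a standard fixed-point iteration. The sketch below is a deliberately simplified stand-in (the toy map $P=\tanh(UP)$ replaces the actual gap equation, and the linear mixing is our choice), showing only the loop structure and a max-difference stopping criterion:

```python
import numpy as np

def self_consistent(F, delta0, tol=1e-8, mixing=0.5, max_iter=10000):
    """Generic self-consistency loop: iterate Delta -> F(Delta) with linear mixing,
    stopping when max|Delta_{n+1} - Delta_n| < tol."""
    delta = np.asarray(delta0, dtype=float)
    for _ in range(max_iter):
        new = (1.0 - mixing) * delta + mixing * np.asarray(F(delta), dtype=float)
        if np.max(np.abs(new - delta)) < tol:
            return new
        delta = new
    raise RuntimeError("self-consistency loop did not converge")

# Toy stand-in for the gap equation: a mean-field "valley polarization"
# P = tanh(U * P), unpolarized (P = 0) for weak coupling and polarized
# for strong coupling.
P_weak = self_consistent(lambda p: np.tanh(0.5 * p), delta0=[0.9])
P_strong = self_consistent(lambda p: np.tanh(2.0 * p), delta0=[0.9])
```

The weak-coupling iteration flows to the unpolarized solution while the strong-coupling one flows to a finite polarization, qualitatively mirroring the $P_{v}(U)$ behavior shown in Fig. S1.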
However, we use a relatively relaxed condition for convergence, as $\Delta_{\tau,\tau^{\prime}}(\bm{k})$ can have multiple degenerate configurations; therefore, we use $\text{max}(\Delta^{n}-\Delta^{n+1})<$ tolerance limit, where $\Delta=\sum_{\bm{k}}\Delta_{\tau,\tau^{\prime}}(\bm{k})$. In the numerical simulation, we observe that only the diagonal elements of $\Delta_{\tau,\tau^{\prime}}(\bm{k})$ are populated whereas the off-diagonal elements are zero, meaning that the valley polarized state is the ground state. We define a valley polarization order parameter $P_{v}~{}=\sum_{\bm{k}}[\Delta_{++}(\bm{k})-\Delta_{--}(\bm{k})]$. In Fig. S1, we plot the dependence of $P_{v}$ on the interaction ($U$) for integer $(\nu=1)$ and fractional $(\nu=1/3)$ fillings, using the parameters discussed in the main text and at $\theta=1.38^{\circ}$. In the case of integer filling ($\nu=1$) shown in Fig. S1 (a), we observe an unpolarized metal, $i.e.$ an equal number of electrons in both valleys, up to around $U=0.3W$. For an intermediate interaction, $0.3W\leq U\leq 0.8W$, we observe a partially polarized metal, and from $U=0.8W$ onward the system is fully polarized. Here, $W$ is the bandwidth of the non-interacting bands. The band in one valley is fully occupied and the system is a valley polarized insulator. This behavior is consistent with the results reported for twisted bilayer graphene in Ref. Bultinck _et al._ (2020a). On the other hand, in the case of fractional filling ($\nu=1/3$) shown in Fig. S1 (b), we observe the system to be an unpolarized metal up to $U=0.85W$. In the $0.85W\leq U\leq 1.4W$ regime, a partially polarized metal is observed, and finally, above $U=1.4W$, the system saturates into a completely polarized metal. Note that at this filling fraction, one can only partially fill the lower mean-field band. Hence, the system remains a metal, in contrast to the case with $\nu=1$. ## .3 III.
Possibility of the Halperin state Here we argue that it is unfavorable to host the Halperin $(m_{1},\ m_{2},\ n_{1})$ states in the TMD Moiré superlattice, because of the opposite Chern number in the opposite valleys. For a flat Chern band, a good starting point to understand the physics is the Landau level, obtained by neglecting the variation of the Berry curvature and the band dispersion. The Chern bands in the $\pm$ valleys of the TMD Moiré superlattice can then be treated as Landau levels stabilized by opposite effective magnetic fields, $\pm B$. The wave function for the Halperin $(m_{1},\ m_{2},\ n_{1})$ state is $\lvert\Psi_{H}\rangle=\prod_{i<j}\left(z_{+i}-z_{+j}\right)^{m_{1}}\left(z_{-i}-z_{-j}\right)^{m_{2}}\left(z_{+i}-z_{-j}\right)^{n_{1}}\exp\left[-\sum_{i,\tau=\pm}\frac{1}{4l_{B}^{2}}|z_{\tau i}|^{2}\right],$ where $z_{\pm i}=x_{i}+iy_{i}$ is the complex coordinate of the $i$th particle in valley $\pm$ and $l_{B}=\sqrt{\hbar c/eB}$ is the magnetic length. Introducing composite particles through flux attachment Fradkin (2013), $\phi_{\tau}(r)=\exp\left(i\Theta_{\tau}\right)\psi_{\tau}(r),$ $\Theta_{+}=m_{1}\int dr_{2}\theta(r_{1}-r_{2})\rho_{+}(r_{2})+n_{1}\int dr_{2}\theta(r_{1}-r_{2})\rho_{-}(r_{2}),$ $\Theta_{-}=m_{2}\int dr_{2}\theta(r_{1}-r_{2})\rho_{-}(r_{2})+n_{1}\int dr_{2}\theta(r_{1}-r_{2})\rho_{+}(r_{2}),$ where $\theta(r_{1}-r_{2})$ is the angle between the vector $r_{2}-r_{1}$ and the $x$ axis. $\phi_{\tau}(r)$ is the composite particle wave function and $\psi_{\tau}(r)$ is the electron wave function. $\rho_{\tau}$ is the charge density. The effective magnetic fields experienced by the composite particles are $B_{\text{eff},+}=B-\Phi_{0}\left(m_{1}\rho_{+}+n_{1}\rho_{-}\right),\ \ \ B_{\text{eff},-}=-B-\Phi_{0}\left(m_{2}\rho_{-}+n_{1}\rho_{+}\right),$ with $\Phi_{0}=2\pi\hbar c/e$. It is not possible to make $B_{\text{eff},\pm}$ vanish for any positive integers $(m_{1},\ m_{2},\ n_{1})$: demanding $B_{\text{eff},+}=B_{\text{eff},-}=0$ and adding the two conditions gives $0=\Phi_{0}\left[(m_{1}+n_{1})\rho_{+}+(m_{2}+n_{1})\rho_{-}\right]>0$, a contradiction for positive densities. Therefore the Halperin state is generally unfavorable energetically. ## .4 IV.
Effective theory for the transition between the valley polarized Fermi liquid and fractional Chern insulator If we take the Landau level point of view by assuming completely flat bands with uniform Berry curvature, we can write down a Ginzburg-Landau free energy to describe the phase transition between the Fermi liquid with valley polarization and the fractional Chern insulator (FCI), analogous to the case of the transition between the fractional topological insulator and superfluid Chen and Yang (2012). In this picture, the two valleys experience an opposite magnetic field $B_{\tau}=\nabla\times a_{\tau}$. The Lagrangian can be written as $\mathcal{L}_{\tau}=\bar{\psi}\left(i\partial_{t}-A_{0}-a_{\tau,0}\right)\psi-\frac{1}{2m_{\psi}}|(-i\nabla- A-a_{\tau})\psi|^{2}-V(\psi)+\frac{1}{4\pi m}\epsilon^{\mu\nu\lambda}a_{\mu}\partial_{\nu}a_{\lambda},$ $\mathcal{L}_{\phi}=\bar{\phi}_{\tau}\left(i\partial_{t}-A_{0}\right)\phi_{\tau}-\frac{1}{2m_{\phi}}|(-i\nabla-A)\phi_{\tau}|^{2}-\alpha|\phi_{\tau}|^{2}-\frac{\beta}{2}|\phi_{\tau}|^{4},$ $\mathcal{L}_{\mathrm{int}}=-g|\phi|^{2}|\psi|^{2}.$ Here $a_{\tau}$ is the dynamical gauge field with a Chern-Simons term, and $A$ is the external electromagnetic gauge field. $\psi$ describes the FCI condensate and $\phi_{\tau}=\langle C_{\tau}^{\dagger}C_{\tau}\rangle$ is the valley polarization order parameter; their spatial average values obey $\langle|\psi|^{2}+|\phi_{\tau}|^{2}\rangle=\rho_{0}$ with $\rho_{0}$ the electron density. Here $m$ is an integer and we consider electron filling at $1/m$. The term with $g>0$ describes the repulsion between the FCI condensate and the valley polarization condensate, and $V(\psi)$ is the potential for the FCI condensate, which depends on the electron interaction. From this construction, it is clear why the Fermi liquid with valley polarization (VP) is favored over the Fermi liquid with the inter-valley coherent state (IVC).
In the VP state, the effective magnetic field due to the valley contrasting Chern bands cancels. In $\mathcal{L}_{\phi}$, there is no $a_{\tau}$ gauge field, which is energetically favorable because of the energy cost associated with the Meissner screening of $a_{\tau}$ when it is present. For the IVC, in contrast, we need to condense $\phi_{\mathrm{IVC}}\sim\langle C_{+}^{\dagger}C_{-}\rangle$. In this case, we will have a coupling to the $a_{\tau}$ field, $-\frac{1}{2m_{\mathrm{IVC}}}|(-i\nabla-A-2a_{\tau})\phi_{\mathrm{IVC}}|^{2}$, which costs energy due to the Meissner effect. The transition from the Fermi liquid with VP to the FCI can then be understood as follows. For an intermediate interaction, due to the repulsion between the FCI and VP condensates, electrons condense into the $\phi_{\tau}$ channel and the Fermi liquid with VP is stabilized. When the interaction becomes strong, the FCI is favored by the $V(\psi)$ term, and more electrons condense into the FCI state.
2101.01259
# Prior Knowledge Input to Improve LSTM Auto-encoder-based Characterization of Vehicular Sensing Data Nima Taherifard, Murat Simsek, Charles Lascelles, and Burak Kantarci N. Taherifard, M. Simsek and B. Kantarci are with the School of Electrical Engineering and Computer Science, University of Ottawa, Ottawa, ON, Canada. E-mails: {ntahe062, murat.simsek, burak.kantarci}@uottawa.ca. Charles Lascelles is with Raven Connected, 441 MacLaren St, Ottawa, ON, K2P 2H3, Canada. Email: [email protected] ###### Abstract Precision in event characterization in connected vehicles has become increasingly important with the responsive connectivity that is available to modern vehicles. Event characterization via vehicular sensors is utilized in safety and autonomous driving applications. While characterization systems have been shown to be capable of predicting risky driving patterns, their precision still remains an open issue. Two major issues of driving event characterization systems need to be addressed in connected-vehicle settings: the heavy imbalance and event infrequency of the driving data, and the lack of time-series detection systems that are optimized for vehicular settings. To overcome these problems, we introduce the application of the prior-knowledge input method to characterization systems. Furthermore, we propose a recurrent denoising auto-encoder network to populate the existing data for a more robust training process. The results of the conducted experiments show that the introduction of knowledge-based modelling enables the existing systems to reach significantly higher accuracy and F1-score levels. Ultimately, the combination of the two methods enables the proposed model to attain a 14.7% accuracy boost over the baseline, achieving an accuracy of 0.96. ###### Index Terms: Deep learning, knowledge-based modelling, encoder networks, intelligent transportation, LSTMs, vehicular sensing.
## I Introduction Leveraging precise measuring equipment along with the increasing computing power of vehicles, driving event characterization (DEC) systems are key components of road safety in an intelligent transportation system (ITS), where highly reliable connectivity is available between the vehicles [1]. Moreover, the response time improvement introduced by 5G [2] further incentivizes the investigation and refinement of DEC for vehicular safety systems. Typically, DECs are deployed in distributed sensing platforms such as mobile devices and vehicles on the road, and inertial sensor data is the main instrument used by these systems. Utilizing in-vehicle inertial sensor data allows for more prompt detection with less lag, which can be exploited in safety applications, since inertial data are direct measures of the physical forces applied to the vehicles [3]. It has been shown that specific driving characteristics, such as sudden lane changes and sudden acceleration, are tightly associated with risky driving behaviors [4]. To detect these behaviors, DEC systems follow the objective of detecting anomalous behavior or classifying pre-defined behaviors in the data [5]. These events include risky driving patterns such as aggressive lane changing, aggressive acceleration, harsh braking, etc., which occur in small time windows during regular driving and cause the issue of data imbalance. The most recent developments in artificial intelligence and machine learning have made the real-time detection of driving events more feasible and accurate [6] by providing accurate methods for the analysis of time-series data. More specifically, recurrent networks are able to store a brief history of the past input data at any time, which makes them feasible tools for time-series data analysis.
Recurrent models such as the Long short-term memory (LSTM) architecture allow machine-learning systems to retain and consider a longer history of the data in the decision making by the networks. Furthermore, storing recent data information is critical for the pattern recognition methods employed in driving event characterization systems. LSTM networks are a refined type of recurrent neural network which are utilized to extract temporal features in time-series data [7]. Such networks are the main enabling factors of contemporary weather forecasting and language models, which heavily depend on the sequential history of the most recent data [8]. Figure 1: The PKI modulation as a module added to the classification networks. It accepts the knowledge of the input signals as an additional feature for an optimized training process. In this paper, in order to study the effect of knowledge-based modelling, a previously studied and optimized convolutional LSTM encoder network is chosen for the baseline experiments. The network allows the system to extract both spatial and temporal features of the signals in order to achieve the best performance. Convolutional layers are utilized as the spatial feature extraction module, since the events have distinct patterns on each axis of the input data, while the LSTM network performs temporal feature extraction to gain knowledge of the time-dependent patterns of the signals. Moreover, a recurrent encoder-decoder network is proposed to not only denoise the input signals but also learn the behavior pattern of individual signals for precise reconstruction of synthetic signals. The main contribution of this paper is the introduction of Prior Knowledge Input (PKI) modelling into machine learning-based recurrent event characterization models. Knowledge-based modelling has been developed to integrate existing knowledge into the learning process, so that further improvement is possible through the mapping between existing knowledge and desired responses.
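The gating mechanism by which an LSTM retains a longer history can be sketched in a few lines (a generic single LSTM cell step in plain NumPy, not the paper's convolutional LSTM encoder; all shapes and weight initializations are illustrative):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, b):
    """One LSTM time step: gates decide what history to forget, store, and expose.

    x: input at time t, shape (d,); h, c: previous hidden and cell state, shape (n,);
    W: stacked gate weights, shape (d + n, 4n); b: bias, shape (4n,).
    """
    n = h.shape[0]
    z = np.concatenate([x, h]) @ W + b
    f = sigmoid(z[:n])             # forget gate
    i = sigmoid(z[n:2 * n])        # input gate
    g = np.tanh(z[2 * n:3 * n])    # candidate cell update
    o = sigmoid(z[3 * n:])         # output gate
    c_new = f * c + i * g          # cell state carries the long-term history
    h_new = o * np.tanh(c_new)     # exposed temporal feature
    return h_new, c_new

# Run a short accelerometer-like 3-axis sequence through the cell.
rng = np.random.default_rng(0)
d, n = 3, 8
W = rng.normal(scale=0.1, size=(d + n, 4 * n))
b = np.zeros(4 * n)
h, c = np.zeros(n), np.zeros(n)
for x in rng.normal(size=(20, d)):
    h, c = lstm_step(x, h, c, W, b)
```

The additive cell update $c_{t}=f\odot c_{t-1}+i\odot g$ is what mitigates the vanishing-gradient problem of plain recurrent networks.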
Prior Knowledge Input (PKI), which is one of the knowledge-based methods, is considered to obtain a better response (up to 0.96 accuracy) when compared to the previous work [5]. PKI utilizes the existing knowledge of driving events as a supplementary input alongside the sensed input signals. As illustrated in Fig. 1, the PKI is modulated onto the existing characterization networks as an add-on, which can enable the models to reach the optimal training conditions. The rest of the paper is organized as follows. Related work is presented in Section II, whereas Section III presents the methodology. The performance evaluation and numerical results are reported and discussed in Section IV. Finally, Section V concludes the work and outlines future directions. ## II Related Work Numerous studies have been proposed to improve the feasibility of driving event characterization models. Additionally, neural networks have surpassed classical machine learning and other intelligent methods in terms of performance. The signal processing domain has also been affected by the aforementioned fields; therefore, many researchers are focusing on deep learning methods in order to improve the state-of-the-art event characterization systems. In this section, we first summarize the most recent driving event characterization models and then review the developments of knowledge-based systems. ### II-A Driving Event Detection Systems Since vehicular sensory data has become increasingly available, the use of neural networks, which are highly dependent on big data, has become feasible for vehicular applications, including driving event characterization. The systems that utilize such networks are typically focused on visual or sensory data inputs from the vehicles. Ultimately, these models are trained to extract the desired features from the data and identify the anomalies or the pre-defined behaviors from the features.
A series of studies have been conducted on feature extraction methods to gain knowledge from inertial or global positional sensed data. Sun et al. [9] propose a system to identify irregular driving on highways based on accurate positional satellite measurements. A unique calculation method is studied in [10] to model the velocity, acceleration, and dynamics of vehicles; a static threshold is then applied to the calculated metric in order to detect driving behaviors. Additionally, other studies focus on the combined modelling of vehicular measured data. In [11], the authors examine the correlation between vehicle velocity and lateral acceleration. Investigating this correlation, they conclude that the threshold applied to the lateral acceleration in an event characterization model has to be decreased as the vehicle gains speed. Moreover, smartphone-reliant driving event characterization systems are also studied in the literature [12] to exploit the cost-efficient built-in sensors of smartphones. In [13], the authors design a system to detect sudden changes of acceleration and unsafe vehicle turns in order to categorize drivers as aggressive or non-aggressive. Figure 2: Data collection method. The utilized sensor records acceleration along the three axes of the vehicles. The modern-day transportation industry is adopting machine-learning systems that can improve through data-driven approaches [14, 15]. Optimized image processing methods are utilized in [16] to categorize driving incidents and decide on alerting first responders to the location. A pre-trained convolutional neural network is used in [17] on phase-space reconstructed vehicle trajectories to evaluate driving behaviors; the study quantitatively evaluates abnormal behaviors to then detect abnormal drivers. Moreover, the distracting behaviors that lead to unsafe driving are the subject of various studies. Rao et al.
[18] take the approach of processing captured camera data via convolutional neural networks and principal component analysis (PCA), whitening the camera feed and classifying the whitened images as distracted or non-distracted driver activities. Recurrent neural networks have been improved and made practical more recently. In addition to the progress on various types of recurrent functions, numerous recurrent-based network architectures have been introduced for innovative applications. There have been novel solutions to the major issue of recurrent neural networks, i.e., the vanishing gradient problem, which caused the networks to face difficulties in the training process. Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) models are among the most recent solutions to this issue. Furthermore, there have been numerous studies aimed at expanding the applications of recurrent networks. [19] proposes a data compression method utilizing recurrent networks. Combining generative and discriminative networks, a denoising auto-encoder is introduced in [20] in which the model is able to take input features on the fly. An auto-encoder network with the support of random sampling from the encoder latent space is proposed with a generative objective in [21]. The model mixes adversarial and reconstruction losses, but unlike Generative Adversarial Network (GAN) discriminators, the authors employ progressive growing of the generator and encoder networks. ### II-B Knowledge-based Modeling Knowledge-based modelling is suitable for driving event classification systems since it only requires the input-output relations [22]. In a conventional neural network, a large training dataset is required to meet the optimal stopping conditions, which is not convenient in vehicular applications where the data are imbalanced. In [23], machine-learning detection models are boosted by novel knowledge-based methods and feature selection mechanisms.
Utilizing Prior Knowledge Input (PKI) and Prior Knowledge Input with Difference (PKID), the authors are able to significantly improve the average accuracy of existing machine-learning methods. The PKI is widely adopted in neural network-based modeling systems, ranging from adversarial task detection in mobile crowd-sensing to microwave modelling for computer-aided design [24]. Using PKI modelling, it is possible to remarkably improve the driving event characterization model in terms of accuracy and overall performance; the experimental results supporting this claim are presented in the following sections. ## III Methodology In this section, the prior knowledge-based signal classification model, which is modulated on top of our previous signal generation network, is presented in detail. Machine learning imitates the learning process of the brain, and its classification accuracy and training efficiency can be improved by augmenting the networks with a knowledge-based modulation. We employ a prior knowledge input method to boost our classification performance, as presented in Section III-C. A brief introduction to our dataset collection process and the details of our signal generation model are revisited in Section III-A and Section III-B, respectively, whereas the parameters are presented in Table I. TABLE I: Notation description for mathematical formulas. Notation | Description ---|--- $x$ | Signal features $\hat{x}$ | Synthetic signal features $j$ | Signal dimension $i$ | Signal length $C$ | Number of classes $t$ | Target category of the signal $e_{PKI}$ | Error value of PKI model $n$ | Training iteration $P_{c}$ | Prediction of the classifier network ### III-A Driving Event Data Gathering Both the signal generation network and the knowledge-based classification network utilize raw accelerometer sensor data over varied recording sessions.
In order to collect the original signals, a Raven OBD-II sensor kit (https://ravenconnected.com/) is mounted on the dashboard of the vehicles. The sensor orientation is perpendicular to the latitudinal, altitudinal, and longitudinal axes of the vehicle as illustrated in Fig. 2. The collected raw signal data are distributed among five pre-defined risky driving behaviors and multiple vehicles with different physical features. The pre-defined behaviors include Aggressive Acceleration (AA), Harsh Braking (HB), Harsh Left Lane Changes (HL), and Harsh Right Lane Changes (HR) events, in addition to Regular Driving (RD) events. Figure 3: Illustration of the baseline convolutional recurrent neural network. Multiple convolutional and LSTM feature extraction units are utilized before the classification network. ### III-B Revisiting Synthetic Signal Generation In an attempt to overcome data imbalance and infrequency, we previously proposed an auto-encoder model [25] which is able to accurately extract the underlying behavior of the signals and generate precise synthetic signals. The model consists of two modules: an LSTM recurrent encoder-decoder network, which is leveraged to populate the training dataset, and a classification module, which is trained on the synthetic dataset. In order to generate the signals, a windowing mechanism is run over each individual raw signal, splitting it into several fixed-length signal windows with overlapping sections. The overlap is used to avoid discontinuities in the windowed signals. The auto-encoder consists of an encoding network which encodes the input signals into lower-dimensional vectors that are feature-rich and contain less noise [26]. The encoded vectors are then used by the decoding network to reconstruct synthetic signals.
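The windowing mechanism can be sketched as follows (an illustrative numpy version; the 600 ms window length, 50% overlap, and 25 Hz sampling rate used here are the settings reported in Section IV-A):

```python
import numpy as np

def slice_windows(signal, win_len, overlap=0.5):
    """Split a (T, axes) signal into fixed-size overlapping windows.
    The overlapping sections avoid discontinuities between windows."""
    stride = max(1, int(win_len * (1.0 - overlap)))
    windows = [signal[s:s + win_len]
               for s in range(0, len(signal) - win_len + 1, stride)]
    return np.stack(windows)

# A hypothetical 3000 ms recording at 25 Hz -> 75 samples, 3 axes
raw = np.zeros((75, 3))
wins = slice_windows(raw, win_len=15, overlap=0.5)  # 15 samples = 600 ms
print(wins.shape)  # (9, 15, 3)
```

With 50% overlap the stride is half a window, so a session of a few seconds yields on the order of ten windows, which is consistent with 70 sessions producing 572 windows in Section IV-A.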
The synthetic signal reconstruction is carried out to populate the events in the dataset and to strengthen the classification training process by mitigating data imbalance. To train the recurrent auto-encoder for synthetic signal generation, the signal windows are infused with randomly generated noise and fed as input to the auto-encoder network. The recurrent auto-encoder network comprises a two-layer LSTM network as the encoder, which maps the signal windows into fixed-size vectors. It is worth mentioning that the conversion of the signals into lower-dimensional vectors by the encoder network has a denoising property, since high-dimensional data is often more convenient to learn in lower dimensions where it contains less noise. The decoder network attempts to reconstruct the original signals from the vectors. We set the network to generate ten times as many synthetic signal windows for the training process of the classification module. In the training process, the auto-encoder network is trained in a self-supervised manner with the intention of minimizing the squared reconstruction error (1) between the original and generated signals. $\sum_{j}\sum_{i}\left\|x_{i}^{\left(j\right)}-{\hat{x}}_{i}^{\left(j\right)}\right\|^{2}$ (1) In order to demonstrate the improvement introduced by the PKI-based modelling proposed in this work, we trained a 3-layer feed-forward multi-layer perceptron (MLP) and a convolutional recurrent classification (ConvLSTM) model which adds spatio-temporal feature extraction layers to the MLP network. The ConvLSTM model utilizes multiple feature extraction layers of decreasing sizes as illustrated in Fig. 3. The objective of both classification networks is to categorize the signals into 5 pre-defined driving event categories using categorical cross-entropy, implemented as the cross-entropy (3) of the softmax output (2).
$f\left(x\right)_{c}=\frac{e^{x_{c}}}{\sum_{c'=1}^{C}e^{x_{c'}}}$ (2) $CE=-\sum_{c=1}^{C}t_{c}\,log(f(x)_{c})$ (3) ### III-C Prior Knowledge Input To boost the performance of the classification networks on the driving event signals, we propose knowledge-based schemes. The Prior Knowledge Input (PKI) method, specifically, is used to incorporate the existing knowledge of the signals into the characterization process. Doing so allows for a more efficient and less complex training process which requires less training data [23], and helps the characterization systems push past the limited accuracy of deep learning methods toward the achievable accuracy rates [22]. Moreover, when no prior knowledge of the inputs is available, any modelling scheme can be utilized to generate the input knowledge; neural networks are often the method of choice for knowledge-based modelling [27]. The PKI model embeds the experience/knowledge of the signal category into the training process, which allows for a reduction of model complexity through augmentation of the model inputs. The training and testing process of the PKI model is shown in Fig. 4. The PKI network is implemented with 2 hidden layers that utilize the $\tanh$ activation function and a softmax classification layer to output the event prediction. The training process can be formulated with (4), iterating to reduce the discrepancy (5) between the network predictions and the prior knowledge over $n$ training iterations. The inference process is calculated by (6), as illustrated in Fig. 4. $\left(PKI\right)_{Train}=\arg\min_{x}{\left\|\big[\,e^{(1)}_{PKI}\;\cdots\;e^{(n)}_{PKI}\,\big]^{T}\right\|}$ (4) $e^{(i)}_{PKI}=C_{events}(x^{(i)})-PKI(x^{(i)},P^{(i)}_{c})$ (5) $P_{PKI}=(PKI)_{Train}(x,P_{c})$ (6) Figure 4: Training and testing process of the prior knowledge input method to improve existing performance of driving event characterization systems.
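For concreteness, the losses in (1)–(3) can be written in a few lines of numpy (an illustrative sketch: `x` below is a hypothetical logit vector for C = 5 classes and `t` a one-hot target, not the paper's actual tensors):

```python
import numpy as np

def reconstruction_loss(x, x_hat):
    """Squared reconstruction error of (1), summed over the signal
    dimension j and the signal length i."""
    return np.sum((x - x_hat) ** 2)

def softmax(x):
    """Softmax of (2) over the C classes."""
    e = np.exp(x - x.max())      # shift by the max for numerical stability
    return e / e.sum()

def cross_entropy(t, x):
    """Categorical cross-entropy of (3); t is a one-hot target vector."""
    return -np.sum(t * np.log(softmax(x)))

x = np.array([2.0, 0.5, 0.1, 0.1, 0.1])   # hypothetical logits, C = 5 events
t = np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # one-hot target: first event class
p = softmax(x)
print(round(float(cross_entropy(t, x)), 3))  # 0.514
```

The cross-entropy is minimized exactly when the softmax output places all probability mass on the target class, which is what drives the classifier toward the 5 pre-defined event categories.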
## IV Experimental Evaluation ### IV-A Experimental Settings All experiments are performed on the gathered dataset of 70 driving event sessions, populated with ten times as many synthetic signals using the auto-encoder network. To keep the experiments credible, all tests are performed on an identical test set. Initially, accelerometer sensor signals with variable durations of 2000 to 3600 milliseconds are captured, with the accelerometer recording frequency set to 25 Hz. In total, 70 driving event sessions are recorded. The distribution of the collected dataset is demonstrated in Table II. Originally, 12 AA, 13 HB, 16 HL, 15 HR, and 14 RD driving sessions were accumulated in the dataset. The sessions are split into 572 fixed-size event windows through the sliding mechanism, using a 600 millisecond window length and 50% overlap, as demonstrated in Fig. 5. Finally, a randomly selected 70% (400) of the sliced windows form the training dataset, while the remaining 30% (172) are kept unseen by the systems for testing. TABLE II: Training signal count distribution before and after slicing over 5 classes. Event Type | Event Sessions | Sliced Windows ---|---|--- HB | 13 | 104 AA | 12 | 108 HL | 16 | 126 HR | 15 | 121 RD | 14 | 113 Figure 5: Abstract visualization of the signal slicing performed before the training process. The signals are sliced into 600 ms windows with a 50% overlap. Figure 6: Training process flow of the PKI method. The vector representations of the noisy raw signals are extracted by the encoder network. The vectors are utilized by the decoder network for synthetic signal generation and by the PKI classification network for characterization of the events. The synthetic data generation is done on the training data through the auto-encoder network with three LSTM layers of decreasing size, from 1000 to 300 cells. Each individual input signal is first duplicated into 10 individual signals, and each is infused with random Gaussian noise.
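The duplicate-and-perturb augmentation described above can be sketched as follows (illustrative numpy; the Gaussian noise standard deviation `sigma` is an assumed value, as the paper does not state it):

```python
import numpy as np

def augment(window, copies=10, sigma=0.05, seed=0):
    """Duplicate one signal window into `copies` noisy versions.
    The noise std `sigma` is an illustrative choice, not the paper's."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(scale=sigma, size=(copies,) + window.shape)
    return window[None, :, :] + noise   # broadcast over the copies axis

win = np.zeros((15, 3))            # one 600 ms window, 3 axes, 25 Hz
batch = augment(win)
print(batch.shape)  # (10, 15, 3)
```

Feeding these noisy copies to the auto-encoder while targeting the clean window is what makes the training a denoising (self-supervised) task.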
The noisy signals are mapped to vectors of size 300 at the final hidden layer of the encoder network. Subsequently, the decoder network is implemented as a mirrored LSTM network with the same dimensions as the encoder and attempts to reproduce the raw signals from the noisy inputs. Furthermore, the decoder output (i.e., synthetic training data) is used in the training process of the classifier networks. In the inference process, however, the decoder network is deactivated and the encoded vectors are passed directly to the classifier network. The process flow of the system is depicted in Fig. 6. In order to demonstrate the impact of the PKI module, we chose a fully connected feed-forward classifier network, more specifically a Multi-Layer Perceptron (MLP), as a baseline, as well as an optimized convolutional recurrent classifier network (ConvLSTM) for further experimentation. The classification modules perform a supervised task, utilizing categorical cross-entropy for optimization on each training iteration, and aim to reduce the discrepancy between the predicted events and the target categories. The Adam optimizer with training parameters $\beta_{1}=0.9$, $\beta_{2}=0.95$, and $\epsilon=10^{-8}$ is selected for both classifier networks. Lastly, the networks are trained with a learning rate of 0.03 until the stopping criterion is triggered. ### IV-B Numerical Results As mentioned earlier, the MLP and ConvLSTM classification networks are trained on our driving event dataset with and without the PKI modulation. Additionally, the networks are trained and tested on raw signals, in the absence of the reconstructed synthetic data, for a more comprehensive demonstration of the PKI benefits. By testing the models on an identical test set, the impact of the PKI model is demonstrated in Table III. Across the collected experimental results, a performance improvement from the PKI modulation can be observed for all tested classification models.
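The precision, recall, and F1-score values reported in Table III follow the usual definitions; a minimal numpy sketch of one common macro-averaged computation (illustrative with hypothetical predictions, not the exact evaluation code):

```python
import numpy as np

def macro_metrics(y_true, y_pred, n_classes=5):
    """Macro-averaged precision, recall, and F1 over the event classes."""
    precs, recs = [], []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))  # true positives
        fp = np.sum((y_pred == c) & (y_true != c))  # false positives
        fn = np.sum((y_pred != c) & (y_true == c))  # false negatives
        precs.append(tp / (tp + fp) if tp + fp else 0.0)
        recs.append(tp / (tp + fn) if tp + fn else 0.0)
    p, r = float(np.mean(precs)), float(np.mean(recs))
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# Hypothetical predictions over the 5 event classes (AA, HB, HL, HR, RD)
y_true = np.array([0, 0, 1, 1, 2, 2, 3, 3, 4, 4])
y_pred = np.array([0, 0, 1, 1, 2, 2, 3, 0, 4, 4])
print(macro_metrics(y_true, y_pred))
```

Because false positives lower precision and false negatives lower recall, a PKI-driven drop in misclassifications lifts both quantities and hence the F1-score, which is the pattern seen in Table III.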
The experiments on each classifier network are performed over 10 separate runs for statistical robustness, and the results are stated with 95% confidence intervals. While the baseline MLP classifier gained the most significant accuracy improvement (on average over 5%), the PKI modulation also raised our most accurate model (the auto-encoder with ConvLSTM classifier) to over 96% average accuracy, as shown in Fig. 7. Last but not least, as a benefit of the PKI accuracy improvements, the decrease in false positive cases is reflected in improved precision, recall, and therefore F1-score values across the models. Figure 7: Prior Knowledge Input modulation impact comparison. TABLE III: Numerical results. PKI Modulation | Model | Accuracy | Precision | Recall | F1-Score ---|---|---|---|---|--- OFF | MLP | 0.624$\pm$0.014 | 0.27$\pm$0.011 | 0.30$\pm$0.012 | 0.29$\pm$0.012 OFF | ConvLSTM | 0.814$\pm$0.009 | 0.77$\pm$0.010 | 0.79$\pm$0.009 | 0.78$\pm$0.009 OFF | AE + MLP | 0.932$\pm$0.007 | 0.89$\pm$0.008 | 0.90$\pm$0.008 | 0.89$\pm$0.007 OFF | AE + ConvLSTM | 0.940$\pm$0.004 | 0.12$\pm$0.003 | 0.11$\pm$0.002 | 0.11$\pm$0.003 ON | MLP | 0.678$\pm$0.009 | 0.39$\pm$0.010 | 0.44$\pm$0.009 | 0.42$\pm$0.009 ON | ConvLSTM | 0.861$\pm$0.008 | 0.68$\pm$0.008 | 0.62$\pm$0.010 | 0.65$\pm$0.009 ON | AE + MLP | 0.954$\pm$0.006 | 0.87$\pm$0.008 | 0.90$\pm$0.006 | 0.88$\pm$0.007 ON | AE + ConvLSTM | 0.961$\pm$0.003 | 0.87$\pm$0.002 | 0.93$\pm$0.002 | 0.90$\pm$0.002 ## V Conclusion This paper has presented a novel approach to boost the classification performance for risky driving events recognized from vehicular sensors. To do so, the integration of Prior Knowledge Input (PKI) modelling into the event characterization networks has been proposed to not only improve the overall classification accuracy but also reduce false detections.
The PKI model leverages the existing knowledge of the input signals as a supporting feature in order to lower the complexity of the model and therefore improve the detection accuracy of the baseline classification networks. The results show that the performance of the classification models can be significantly improved with the introduction of PKI modelling. Our results reveal that using the PKI method improves the performance of the baseline MLP classifier by over 5%. Similarly, the ConvLSTM network experiences a 4.7% improvement when coupled with the prior knowledge module, while the performance of the auto-encoder with ConvLSTM is improved by an additional 2%. Future extensions of this work include extending the experiments to larger datasets where data scarcity is not a challenge. Furthermore, characterization of additional behaviour classes is also part of our ongoing work. ## Acknowledgment This work was supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC) DISCOVERY RGPIN/2017-04032 and Ontario Centres of Excellence (OCE) TalentEdge Internship Project 32815 in collaboration with Raven Connected. ## References * [1] F. A. Silva, A. Boukerche, T. R. B. Silva, L. B. Ruiz, E. Cerqueira, and A. A. Loureiro, “Vehicular networks: A new challenge for content-delivery-based applications,” ACM Computing Surveys, vol. 49, no. 1, pp. 1–29, 2016. * [2] C. Campolo, A. Molinaro, A. Iera, and F. Menichella, “5G network slicing for vehicle-to-everything services,” IEEE Wireless Communications, vol. 24, no. 6, pp. 38–45, 2017. * [3] J. Ryu and J. C. Gerdes, “Integrating inertial sensors with global positioning system (GPS) for vehicle dynamics control,” J. Dyn. Sys., Meas., Control, vol. 126, no. 2, pp. 243–254, 2004. * [4] J. M. Scanlon, R. Sherony, and H. C. Gabler, “Models of driver acceleration behavior prior to real-world intersection crashes,” IEEE Trans. on Intel. Transportation Systems, vol. 19, no. 3, pp. 774–786, 2017. * [5] N.
Taherifard, M. Simsek, C. Lascelles, and B. Kantarci, “Attention-based event characterization for scarce vehicular sensing data,” IEEE Open Journal of Vehicular Technology, vol. 1, pp. 317–330, 2020. * [6] Z. Che, S. Purushotham, K. Cho, D. Sontag, and Y. Liu, “Recurrent neural networks for multivariate time series with missing values,” Scientific Reports, vol. 8, no. 1, pp. 1–12, 2018. * [7] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Computation, vol. 9, no. 8, pp. 1735–1780, 1997. * [8] A. Sagheer and M. Kotb, “Time series forecasting of petroleum production using deep LSTM recurrent networks,” Neurocomputing, vol. 323, pp. 203–213, 2019. * [9] R. Sun, W. Y. Ochieng, and S. Feng, “An integrated solution for lane level irregular driving detection on highways,” Transportation Research Part C: Emerging Technologies, vol. 56, pp. 61–79, 2015. * [10] A. S. Zeeman and M. J. Booysen, “Combining speed and acceleration to detect reckless driving in the informal public transport industry,” in Intl. IEEE Conf. on Intelligent Transportation Systems, pp. 756–761, 2013. * [11] L. Eboli, G. Mazzulla, and G. Pungillo, “Combining speed and acceleration to define car users’ safe or unsafe driving behaviour,” Transportation Research Part C: Emerging Technologies, vol. 68, pp. 113–125, 2016. * [12] C. Saiprasert, T. Pholprasit, and S. Thajchayapong, “Detection of driving events using sensory data on smartphone,” Intl. J. of Intelligent Transportation Systems Research, vol. 15, no. 1, pp. 17–28, 2017. * [13] R. Chhabra, S. Verma, and C. R. Krishna, “Detecting aggressive driving behavior using mobile smartphone,” in Intl. Conf. on Communication, Computing and Networking, pp. 513–521, Springer, 2019. * [14] M. Soyturk, K. N. Muhammad, M. N. Avcil, B. Kantarci, and J. Matthews, “From vehicular networks to vehicular clouds in smart cities,” in Smart Cities and Homes, pp. 149–171, Elsevier, 2016. * [15] A. Boukerche, B. Kantarci, and C.
Kaptan, “Towards ensuring the reliability and dependability of vehicular crowd-sensing data in GPS-less location tracking,” Pervasive & Mobile Computing, vol. 68, p. 101248, 2020. * [16] N. Taherifard, M. Simsek, and B. Kantarci, “Bridging connected vehicles with artificial intelligence for smart first responder services,” in IEEE Global Conf. on Signal and Information Processing, pp. 1–5, 2019. * [17] X. He, L. Xu, and Z. Zhang, “Driving behaviour characterisation by using phase-space reconstruction and pre-trained convolutional neural network,” IET Intelligent Transport Systems, vol. 13, no. 7, pp. 1173–1180, 2019. * [18] X. Rao, F. Lin, Z. Chen, and J. Zhao, “Distracted driving recognition method based on deep convolutional neural network,” Journal of Ambient Intelligence and Humanized Computing, pp. 1–8, 2019. * [19] G. Toderici, D. Vincent, N. Johnston, S. Jin Hwang, D. Minnen, J. Shor, and M. Covell, “Full resolution image compression with recurrent neural networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5306–5314, 2017. * [20] A. Ashfahani, M. Pratama, E. Lughofer, and Y.-S. Ong, “Devdan: Deep evolving denoising autoencoder,” Neurocomputing, vol. 390, pp. 297–314, 2020. * [21] A. Heljakka, A. Solin, and J. Kannala, “Pioneer networks: Progressively growing generative autoencoder,” in Asian Conference on Computer Vision, pp. 22–38, Springer, 2018. * [22] M. Simsek and A. Aoad, “Efficient reconfigurable microstrip patch antenna modeling exploiting knowledge based artificial neural networks,” in Simulation-Driven Modeling and Optimization (S. Koziel, L. Leifsson, and X.-S. Yang, eds.), (Cham), pp. 185–206, Springer International Publishing, 2016. * [23] M. Simsek, B. Kantarci, and A. Boukerche, “Knowledge-based machine learning boosting for adversarial task detection in mobile crowdsensing,” in IEEE Symp. on Comp. and Communications, [virtual], July 2020. * [24] M. Simsek, Q.-J. Zhang, H. Kabir, Y. Cao, and N. S.
Sengor, “The recent developments in microwave design,” Int. J. of Mathematical Modelling and Numerical Optimisation, vol. 2, pp. 213–228, Apr. 2011. * [25] N. Taherifard, M. Simsek, C. Lascalles, and B. Kantarci, “Machine learning-driven event characterization under scarce vehicular sensing data,” in IEEE International Workshop on Computer Aided Modeling and Design of Communication Links and Networks (CAMAD), 2020. * [26] W. Yu, G. Zeng, P. Luo, F. Zhuang, Q. He, and Z. Shi, “Embedding with autoencoder regularization,” in Joint European Conf. on ML and Knowledge Discovery in Databases, pp. 208–223, Springer, 2013. * [27] M. Simsek, “Efficient neural network modeling of reconfigurable microstrip patch antenna through knowledge-based three-step strategy,” International Journal of Numerical Modelling: Electronic Networks, Devices and Fields, vol. 30, no. 3-4, p. e2160, 2017.
# Starshade Rendezvous: Exoplanet Sensitivity and Observing Strategy Andrew Romero-Wolf Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109, USA Geoffrey Bryden Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109, USA Sara Seager Department of Earth and Planetary Sciences, and Department of Physics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA N. Jeremy Kasdin University of San Francisco, College of Arts and Sciences, San Francisco, CA 94117, USA Jeff Booth Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109, USA Matt Greenhouse NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA Doug Lisman Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109, USA Bruce Macintosh Kavli Institute for Particle Astrophysics and Cosmology, Stanford University, Stanford, CA 94305, USA Stuart Shaklan Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109, USA Melissa Vess NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA Steve Warwick Northrop Grumman Aerospace Systems, Redondo Beach, CA 90278, USA David Webb Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109, USA John Ziemer Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109, USA Andrew Gray Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109, USA Michael Hughes Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109, USA Greg Agnes Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109, USA Jonathan W. Arenberg Northrop Grumman Aerospace Systems, Redondo Beach, CA 90278, USA S. 
Case Bradford Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109, USA Michael Fong Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109, USA Jennifer Gregory Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109, USA Steve Matousek Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109, USA Jason Rhodes Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109, USA Phil Willems Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109, USA Simone D’Amico Stanford University, Stanford, CA 94305, USA John Debes Space Telescope Science Institute, Baltimore, MD 21218, USA Shawn Domagal-Goldman NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA Sergi Hildebrandt Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109, USA Renyu Hu Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109, USA Alina Kiessling Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109, USA Nikole Lewis Department of Astronomy and Carl Sagan Institute, Cornell University, Ithaca, NY 14853, USA Maxime Rizzo NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA Aki Roberge NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA Tyler Robinson Department of Astronomy and Planetary Science, Northern Arizona University, Flagstaff, AZ 86011, USA Leslie Rogers Astronomy Department, University of Chicago, Chicago, IL 60637, USA Dmitry Savransky Sibley School of Mechanical and Aerospace Engineering, Cornell University, Ithaca, NY 14853, USA Chris Stark Space Telescope Science Institute, Baltimore, MD 21218, USA ###### Abstract Launching a starshade to rendezvous with the Nancy Grace Roman Space Telescope would provide the first opportunity to directly image the habitable zones of nearby sunlike stars in the coming decade.
A report on the science and feasibility of such a mission was recently submitted to NASA as a probe study concept [1]. The driving objective of the concept is to determine whether Earth-like exoplanets exist in the habitable zones of the nearest sunlike stars and have biosignature gases in their atmospheres. With the sensitivity provided by this telescope, it is possible to measure the brightness of zodiacal dust disks around the nearest sunlike stars and establish how their population compares to our own. In addition, known gas-giant exoplanets can be targeted to measure their atmospheric metallicity and thereby determine if the correlation with planet mass follows the trend observed in the Solar System and hinted at by exoplanet transit spectroscopy data. In this paper we provide the details of the calculations used to estimate the sensitivity of Roman with a starshade and describe the publicly available Python-based source code used to make these calculations. Given the fixed capability of Roman and the constrained observing windows inherent for the starshade, we calculate the sensitivity of the combined observatory to detect these three types of targets and we present an overall observing strategy that enables us to achieve these objectives. ###### keywords: planets, space optics, imaging ## 1 Introduction Space-based direct imaging is the next frontier of discovery for exoplanet science, moving beyond detection of Earth-like planets toward full characterization of their atmospheres, potentially identifying signs of life. The Nancy Grace Roman Space Telescope (hereafter referred to as Roman) is scheduled to launch in late 2025. While its Coronagraph Instrument (CGI) will reach planet-star flux ratios only achievable in space, with the capability to directly image gas giant planets around nearby stars, it does not have the sensitivity to detect Earth-like planets. 
With the launch of a companion starshade – a free-flying 26-m external occulter – such a goal can be reached within the next decade. The Starshade Rendezvous Probe (SRP) mission concept was presented in a recent report [1] assessing the feasibility of flying an external starshade. Using the CGI, the starshade significantly enhances the exoplanet observing capabilities of Roman by providing 1) improved contrast and a reduced inner working angle (IWA), enabling sensitivity to Earth-like exoplanets in the habitable zone, 2) an unlimited outer working angle, enabling a wider view of the planetary system around each target, and 3) higher throughput (the coronagraph masks are not needed with a starshade), enabling the sensitivity needed to spectrally characterize an Earth-like exoplanet. In this paper we provide the technical basis for the SRP study report [1] along with the simulations used to estimate the sensitivity of the observatory. While many such assessments have been done in the past [2, 3, 4, 5], this study started by defining the science objectives of the mission with detailed knowledge of Roman constraints, CGI expected performance, and the significant advances of NASA’s S5 Starshade technology development program (https://exoplanets.nasa.gov/exep/technology/starshade/). While detailed simulations of starshade missions exist [5, 6, 7], we found that the SRP has a relatively small number of targets, which imposes different demands. The need to understand how to balance the different objectives of the mission, within significant mission constraints, and how to prioritize targets led us to produce a simulations package focused on the SRP. This software is publicly available for reproduction of the results below and for comparison with similar simulations (https://github.com/afromero/Starshade_Rendezvous_Probe_sims). The paper is structured as follows.
We briefly review the science objectives presented in the SRP report (§2), followed by an outline of the top-level mission constraints based on its partnering with Roman (§3). We then describe the model used to quantify the starshade performance – both the observatory characteristics and the expected astrophysical scene (§4). Using this model, we calculate the expected integration times required to achieve a given signal-to-noise with Roman’s nominal optical design, and hence the sensitivity of the observatory toward individual targets (§5). We then provide an overall observing strategy, including choices on the number of stars, the number of visits per star, and when to switch from imaging to spectroscopy (§6). Last, in §7, we calculate the expected performance of the mission with a specific list of target stars, summarizing in §8. ## 2 Science Objectives The Starshade Rendezvous Probe Mission Study Report [1] developed both the scientific motivation and technical feasibility for such a mission. Further detail on the science case can be found there; here we focus not on motivating the science goals, but rather on whether these goals can be achieved. In summary, the science objectives of the mission are three-fold: * • Objective 1: Habitability and Biosignature Gases & The Nearest Solar System Analogs. a) Determine whether Earth-like exoplanets exist in the habitable zones around the nearest sunlike stars and whether they have signatures of oxygen and water vapor in their atmospheres. b) Detect and characterize planets orbiting the nearest sunlike stars. * • Objective 2: Brightness of Zodiacal Dust Disks. Establish if the zodiacal cloud of our inner Solar System is representative of the population of our nearest neighbor stars. * • Objective 3: Gas-Giant Atmospheric Metallicity.
Determine the atmospheric metallicity of known cool giant planets to examine trends with planetary mass and orbital semi-major axis, and to determine if these trends are consistent with our Solar System.

The first objective – finding Earth-like planets – is paramount. As such, maintaining the sensitivity to discover and potentially characterize Earth-like exoplanet candidates around these stars drives the observatory requirements. Meeting these challenging requirements means the observatory will also be capable of discovering and characterizing a wide range of planet types, from those like the giant planets in our Solar System, to the sub-Neptune planets commonly discovered by Kepler [8], and down to Earth-mass planets. The difficulty of imaging Earth-like planets necessitates a “deep dive” approach; that is, a detailed investigation of a relatively small sample of our nearest-neighbor sunlike stars – only the closest sunlike stars where starshade observations have both high imaging sensitivity for exoplanet discovery and high spectral sensitivity to characterize their atmospheric composition, while also allowing multiple visits to constrain the orbits of any habitable zone planets found (see [1] for a more detailed discussion). Planetary systems almost certainly exist around several of these stars [9]. If an Earth-like planet candidate is discovered around at least one of these stars, it will be spectroscopically observed to hunt for water vapor and oxygen. Three steps are needed to accomplish this goal: 1) initial detection via direct imaging, 2) habitable zone determination via multi-epoch orbit tracing, and 3) atmosphere characterization via spectroscopy of the most compelling candidate planets – particularly those in the habitable zone. Each of these steps places unique requirements on the mission parameters, as described in this paper.
The requirement to detect Earth-like exoplanets also enables the ability to detect exozodiacal dust disks and spectral features of gas giants. The distribution of exozodiacal dust brightness, at the level relevant for Earth-like exoplanet detection, is largely unconstrained. Recent bounds on the warm dust disk brightness of many sunlike stars of interest to the SRP are provided by LBTI [10], but the sensitivity is still more than an order of magnitude in excess of what is needed. Objective 2 will provide the key information necessary to assess the sensitivity to directly image habitable zone exoplanets. Objective 3 will test whether the correlation between atmospheric gas metallicity and planet mass (as well as semi-major axis) observed in our Solar System, and hinted at in a few transit spectroscopy measurements of exoplanets [11], is a universal trend. Identifying this correlation with the SRP would provide evidence of common processes of planetary system formation.

## 3 Mission Constraints

The Starshade Rendezvous Probe is an enhancement of the Roman mission. As such, many of the starshade mission’s parameters are fixed by Roman’s established telescope, instruments, and operational timeline. The key constraints imposed on a starshade mission are summarized in Table 1. (All of the parameters in Table 1 are up-to-date at the time of this study; a full list of current Roman coronagraph characteristics can be found at https://wfirst.ipac.caltech.edu/sims/Param_db.html#coronagraph_mode.) In particular, Roman’s Coronagraph Instrument (CGI) will be used for imaging and spectroscopy. The bandpass and spectral resolution for this instrument are designed for a similar science case (exoplanet imaging/spectroscopy) and, as such, are a good match to our science goals. The field of regard is limited by both the telescope and the starshade. Roman cannot point within 54∘ of the Sun or light will scatter into the telescope assembly.
Solar angles greater than 83∘ are excluded by the starshade; beyond this limit the starshade can no longer be viewed close to face on without being illuminated by sunlight. Although this last limit is imposed by the starshade architecture rather than by the telescope, it is an important constraint that restricts the target observing windows. The imaging end-to-end efficiency, discussed in more detail in §5, accounts for all losses from the telescope aperture through the optical path (with the coronagraph masks excluded) down to the quantum efficiency of the detector. The starshade observations depend on the CGI camera with few modifications, so we take this as an imposed constraint. The field of view of the CGI detector is limited to 4.5′′ (radial). This parameter is important for detecting any planets orbiting outside the habitable zone of the target of interest. Lastly, note that while Roman’s lifetime requirement is 5 years, we assume the starshade launch will occur 3 years into the mission, giving an overlap of 2 years. Extending beyond that duration is ultimately limited by the lifetime of the CGI camera and the overall Roman system.

Table 1: Mission Constraints

| Parameter | Expected Performance |
|---|---|
| Starshade nominal mission lifetime | 2 years |
| Telescope primary mirror | 2.4 m |
| Solar exclusion angle (min) | 54∘ |
| Solar exclusion angle (max) | 83∘ |
| Detector bandpass | 400 – 1000 nm |
| Imaging resolution | 65 mas at 750 nm |
| Imaging end-to-end efficiency | 0.035 (see Table 2 for details) |
| Imaging field of view (FOV) | 4.5′′ (radial) |

## 4 Observing Model

This section covers the observing model used to calculate the signal to noise ratio (SNR) of an exoplanet directly imaged with a telescope-starshade system. The overall observing model is shown as a flowchart in Figure 1.
This model consists of both astrophysical inputs (red boxes, §4.1) and instrument parameters (green boxes, §4.2), which are combined to estimate the SNR (§4.3). The exoplanet direct imaging observing geometry of the telescope-starshade system is shown in Figure 2. The starshade blocks the starlight up to an inner working angle (IWA), above which exoplanets can be observed. A planet at radius $R_{pl}$ is observed with illumination phase angle $\beta$. Habitable zone exoplanets lie between the inner and outer habitable zone (IHZ, OHZ) radii, where water can exist in its liquid phase. The telescope-starshade system has a region of allowed Sun angles over which it can operate, with the lower limit defined by the exclusion angle of the telescope baffle and the upper limit defined by reflection and scattering of sunlight off the starshade into the telescope baffle.

Figure 1: The detailed observatory model used for the sensitivity estimates in this study. Single-visit completeness is determined by IWA, field of view (FOV), and SNR resulting from a planet observed with the SRP system. Red boxes are parameters entirely determined by nature, green boxes are parameters controlled by observatory design, and black boxes are a combination of both. The same model is used for spectral completeness by using the corresponding values for bandpass and end-to-end efficiency.

Figure 2: Starshade observatory geometry. The telescope is pointed at a star with the starshade blocking the starlight. The inner working angle (IWA) is the angle subtended by the direction to the star and the outer radius of the starshade. Observations of planets inside this region are excluded. The habitable zone of the star is defined by the inner habitable zone (IHZ) radius and outer habitable zone (OHZ) radius, defined by the distance to the star where the stellar irradiance allows for water in liquid state.
A planet at radius $R_{pl}$ has illumination phase angle $\beta$ defined as the star-planet-telescope angle. The telescope-starshade system has a region of Sun angles at which it can observe. The lower limit is set by the solar exclusion angle of the telescope baffle while the upper limit is set by reflection of the Sun off the starshade’s surface into the telescope. These solar exclusion angles result in important observational constraints.

### 4.1 Astrophysical Model

The signal is produced by the star’s flux density being reflected off the surface of the planet. The astrophysical sources of background light are the exozodiacal dust disk as well as our own Solar System’s zodiacal dust. For this subsection, we focus solely on the astrophysical model with the parameters provided in §5.1.1.

#### 4.1.1 Star Flux Density

We use a simple black body model for the star. The model inputs are the bolometric luminosity $L_{\star}$, blackbody temperature $T_{\star}$, stellar mass $M_{\star}$, and distance from Earth $d_{\star}$. The star’s spectral radiance is given by Planck’s law

$B_{\star}(\lambda,T_{\star})=\frac{2hc^{2}}{\lambda^{5}}\frac{1}{\exp(hc/\lambda kT_{\star})-1}.$ (1)

The flux density $F_{\star}$ at Earth in a band defined by limits $\lambda_{\mathrm{min}}$ and $\lambda_{\mathrm{max}}$ is given by

$F_{\star}=\frac{L_{\star}}{4\pi d^{2}_{\star}}\frac{1}{\sigma_{\mathrm{SB}}T_{\star}^{4}}\int_{\lambda_{\mathrm{min}}}^{\lambda_{\mathrm{max}}}d\lambda\ B_{\star}(\lambda,T_{\star})$ (2)

where $\sigma_{\mathrm{SB}}$ is the Stefan-Boltzmann constant. The first fractional term in the equation is the total wavelength-integrated flux. The second fractional term is the normalization by the Stefan-Boltzmann law, which is the integral of Equation 1 over all wavelengths. The input data for the stars used in this study were obtained from ExoCat-v1 by M.
Turnbull (2015), “ExoCat-1: The Nearby Stellar Systems Catalog for Exoplanet Imaging Missions”, arXiv:1510.01731, https://nexsci.caltech.edu/missions/EXEP/EXEPstarlist.html, which is a compilation of stars within 30 pc. The target selection is discussed in §5.

#### 4.1.2 Planet Flux Density

For bodies that are not self-luminous, the flux density depends entirely on the light from the host star (with flux density $F_{\star}$) reflected off the planet’s surface. The fraction of light reflected scales as the square of the ratio of the planet radius $r_{\rm pl}$ to its distance from the star $R_{\rm pl}$. The dependence of the reflected light on the phase angle $\beta$ (i.e. the star-planet-observer angle) is assumed to be Lambertian

$\Phi_{L}(\beta)=\frac{\sin\beta+(\pi-\beta)\cos\beta}{\pi}.$ (3)

This assumption of isotropic scattering is approximately correct for cloudy gas giants [12, 13], but not for rocky planets that have enhanced forward scattering. Our assumed geometric albedo for Earth-like planets (0.2) only matches the Earth’s reflectivity for scattering angles $\beta\lesssim 90^{\circ}$; for more grazing angles, it is an underestimate [14, 15]. The function $\Phi_{L}(\beta)$ is normalized to unity at maximum illumination ($\beta=0$), takes the value of $1/\pi$ at maximum elongation ($\beta=\pi/2$), and vanishes smoothly as the planet eclipses the star ($\beta\to\pi$). The normalization of the reflected light at maximum illumination, taking into account the $(r_{\rm pl}/R_{\rm pl})^{2}$ dependence and $\Phi_{L}(\beta)$, is the geometric albedo $A_{G}$ [16]. Altogether, the flux of a planet is given by

$F_{\rm pl}(\beta)=A_{G}\left(\frac{r_{\rm pl}}{R_{\rm pl}}\right)^{2}\Phi_{L}(\beta)F_{\star}.$ (4)

The model above applies to the entire range of planet types considered in this study (Earths, super-Earths, sub-Neptunes, Neptunes, and Jovian planets) with different parameters.
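As a minimal sketch of Equations 3 and 4 (our own illustration, not the mission software; the Earth-like values in the check below are illustrative assumptions), the reflected-light model can be written as:

```python
import math

def lambertian_phase(beta):
    """Lambertian phase function of Eq. (3); beta is the phase angle in radians."""
    return (math.sin(beta) + (math.pi - beta) * math.cos(beta)) / math.pi

def planet_flux(F_star, A_G, r_pl, R_pl, beta):
    """Reflected-light planet flux of Eq. (4).
    r_pl and R_pl must be in the same units (e.g. meters)."""
    return A_G * (r_pl / R_pl) ** 2 * lambertian_phase(beta) * F_star

# Planet-to-star flux ratio for Earth-like values (A_G = 0.2, r = 6.371e6 m,
# R = 1 AU) at quadrature (beta = pi/2): roughly 1e-10
contrast = planet_flux(1.0, 0.2, 6.371e6, 1.496e11, math.pi / 2)
```

The limiting values quoted in the text, $\Phi_{L}(0)=1$, $\Phi_{L}(\pi/2)=1/\pi$, and $\Phi_{L}(\pi)=0$, fall out directly.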
The parameters used for this study are provided in Section 5.

#### 4.1.3 Dust

The main sources of natural backgrounds are the sunlight scattered by zodiacal dust within our own Solar System and the starlight scattered by exozodiacal dust surrounding the exoplanet. Our model for the Solar System’s (SS) zodiacal dust is from [17], which considers variations with both wavelength and direction; the ecliptic latitude and longitude of each target star is taken into account. The exozodiacal dust brightness for a 1-zodi disk is set to 22 mag/arcsec² [18]. This is somewhat higher than the nominal brightness of the SS zodiacal dust itself (23 mag/arcsec²) because the exozodiacal dust is twice as thick (we only look through half of the SS zodiacal dust’s thickness) and because we view it at more forward-scattering angles [4]. While some of our target stars do have measured levels of exozodiacal dust, most are non-detections. Analysis of these upper limits suggests a median dust thickness a factor of 4.5 higher than that of the Solar System [10]. We adopt this enhanced level as our fiducial amount of exozodiacal dust. The exozodi model is scaled based on the ratio of stellar flux to solar flux and corrected for orbital location relative to the Earth-equivalent insolation distance (EEID) given by 1 AU$\times(L_{\star}/L_{\odot})^{1/2}$. The flux densities of the SS zodiacal and exozodiacal dust backgrounds are proportional to the solid angle subtended by the point spread function (PSF) core ($\Delta\Omega_{PSF}$), which is inversely proportional to the square of the telescope diameter. For a brightness distribution $dF/d\Omega$, the flux density is approximated by

$F=\left(\frac{dF}{d\Omega}\right)\Delta\Omega_{PSF}.$ (5)

### 4.2 Observatory Model

The observatory is a combination of the Roman telescope, including the CGI instrument, and the starshade occulter.
#### 4.2.1 Telescope

The sensitivity of the telescope can be summarized in a single value $F_{sens}$, which is the flux that would produce a single photon count on average for a given integration time $T_{\mathrm{int}}$. This is given by

$F_{sens}=\left[A\epsilon\Delta\lambda T_{\mathrm{int}}\frac{\lambda_{c}}{hc}\right]^{-1},$ (6)

where $A$ is the geometric aperture of the telescope, based solely on its diameter, and $\epsilon$ is the end-to-end efficiency, which fully accounts for the fraction of photons entering the geometric aperture that produce photon counts in the detector. The product $\epsilon A$ is the effective area of the telescope. The bandwidth is given by $\Delta\lambda$, $\lambda_{c}$ is the central wavelength, $h$ is Planck’s constant, and $c$ is the speed of light. The detector noise produces a photon-equivalent background count rate given by $N_{\mathrm{det}}$. The model described in the rest of this section relies heavily on the models used for CGI [19, 20]. Roman is a 2.4-m diameter optical telescope, corresponding to a collecting area of $A=4.5$ m². The diffraction-limited spatial full-width at half maximum (FWHM) resolution is $\theta_{\rm PSF}=0.065$′′ at 750 nm. The pixel scale for the CGI instrument is 0.0218′′. The solid angle subtended by the PSF is approximated as $\Delta\Omega_{PSF}\simeq\pi\theta_{\rm PSF}^{2}$, which gives $\Delta\Omega_{PSF}\simeq 3.1\times 10^{-13}$ sr at 750 nm. The starshade imaging bandpass filter (615-800 nm) is tuned to be sensitive to water vapor and oxygen absorption lines at 720 nm and 760 nm, respectively. The imaging field of view of 4.5′′ enables observations well beyond the habitable zones of the nearest sunlike stars, providing the potential for discovery of giant outer planets. The focal plane detector is an electron-multiplying CCD (EMCCD).
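Equation 6 is straightforward to evaluate; the sketch below (our own illustration, not the study’s code) also checks that the quoted imaging efficiency follows from the product of the loss factors itemized in Table 2:

```python
# Planck constant [J s] and speed of light [m/s]
h = 6.626e-34
c = 2.998e8

# End-to-end imaging efficiency as the product of the Table 2 loss factors
eps = 0.82 * 0.81 * 0.60 * 0.90 * 0.285 * 0.34   # approximately 0.035

def f_sens(A, eps, dlam, T_int, lam_c):
    """Single-photon flux sensitivity of Eq. (6): the band-averaged spectral
    flux density that yields one photon count on average in T_int (SI units)."""
    return 1.0 / (A * eps * dlam * T_int * lam_c / (h * c))

# Roman imaging band: A = 4.5 m^2, 615-800 nm (185 nm wide, 750 nm center),
# one day of integration
F = f_sens(4.5, eps, 185e-9, 86400.0, 750e-9)
```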
The detector noise is $\sim$10 counts/hour, including noise equivalent dark current (the dominant term), clock-induced charge, and read noise (a negligible contribution). While the EMCCD detector has the advantage of no read noise, it is significantly degraded by cosmic ray hits; 5 years of degradation is included, reducing the detector quantum efficiency (QE) to 28.5%. The end-to-end efficiency ($\epsilon$), which includes the optical throughput of the telescope and the detector quantum efficiency (dQE), is 3.5% for imaging and 3.4% for spectroscopy. A complete budget for the factors going into these throughput calculations is provided in Table 2. The Roman pupil has a central obscuration that reduces the raw collecting area by 18%. The light collected at the aperture goes through multiple reflections in various elements of the telescope optics before reaching the CGI, with a further reduction of 19%. Prior to reaching the CGI, a dichroic beam splitter divides the signal between the CGI and a guidance camera with 90% efficiency. Within the CGI, multiple optical elements deliver the light to the detector with an efficiency of 60%. The detector effective quantum efficiency of 28.5% combines the intrinsic QE with cosmic ray hits and other detector effects. We use the end-of-life value since the starshade would operate in the last couple of years of the Roman telescope. The PSF has 34% of its total light in the core due to diffraction from the struts and central obscuration of the Roman aperture. The top-level instrument characteristics are summarized in Table 1. While the model in the previous subsection provides an adequate description for the wide-band imaging mode, the spectrograph requires some additional details.
The currently planned implementation is a slit-prism spectrograph, which blocks all but a narrow region with a slit of width $D$ and disperses the light orthogonal to the slit’s long axis. The key parameter for the design is the slit width $D$, which must be wide enough to accommodate telescope jitter and motion of the planet during a long observation, while not being so wide that it disperses a significant amount of zodiacal and exozodiacal dust background photons onto the pixels of interest for the exoplanet. The prism-detector configuration is described by the spectral resolution parameter $R=\lambda/\Delta\lambda$, which can, in general, be wavelength dependent. Note that the spectrograph slit limits observations to one planet at a time, but still allows for simultaneous measurement of the background along the slit.

Table 2: Telescope efficiency ($\epsilon$)

| Contribution | Best Estimate |
|---|---|
| Geometric obscuration of the WFIRST pupil | 0.82 |
| Reflection losses in the telescope optics | 0.81 |
| Reflection & transmission losses (excluding coronagraph masks) | 0.60 (P) / 0.58 (S) |
| Starshade dichroic beam splitter | 0.90 |
| Detector effective QE (at end of life) | 0.285 |
| Core throughput losses due to diffraction from WFIRST pupil | 0.34 |
| Total | 0.035 (P) / 0.034 (S) |

Note. — P = Photometry; S = Spectroscopy

#### 4.2.2 Starshade

The starshade performance can be summarized by three key parameters – the inner working angle (IWA), the instrument contrast ($C_{SS}$), and the solar exclusion angles (see Table 3).

Table 3: Assumed Mission Parameters

| Parameter | Assumed Performance |
|---|---|
| Time allocation [1] | 136 days |
| Inner working angle (IWA) | 100 mas |
| Instrument contrast | $4\times 10^{-11}$ |
| Delta-V for retargeting | 1,100 m/s |

The starshade IWA is the angle subtended by the telescope boresight and the outer radius of the starshade. The starshade design considered here has $\sim$100% optical throughput at the IWA [21].
Although it is, in principle, possible to observe targets at angles below the IWA, the throughput is reduced and the PSF is distorted. For practical purposes, we assume exoplanets are only observable at angles above the IWA. The size of the starshade and the distance between the starshade and telescope are determined primarily by the desired inner working angle, the longest wavelength in the observing band, the diameter of the shadow at the telescope, and the required suppression level. The optical bandwidth, constraints on feature sizes, the ratio of petal length to overall diameter, and the number of petals are other factors that also impact the size and separation of the starshade (see [22, 23, 24, 25]). The IWA is chosen to observe the habitable zones of nearby sunlike stars. An IWA of 100 mas corresponds to a separation of 1 AU at 10 pc, enabling observations of Earth-like planets for solar-type stars within $\sim$10 pc. The starshade instrument contrast ($C_{SS}$) is defined as the fraction of starlight leaked per resolution element at the IWA. The resulting background flux is $F_{SS}=C_{SS}F_{\star}$. This results in a background contribution in the habitable zone, with $C_{SS}$ specified so as to keep it below the expected zodi and exozodi backgrounds, enabling the sensitivity to detect Earth-like exoplanets. The current best estimate for the contrast is $C_{SS}=4\times 10^{-11}$ (see [1] for details). In addition to scattered starlight, solar glint and diffracted speckle patterns from the target result in additional localized, predictable backgrounds that can be calibrated. While solar glint can be a limiting factor, recent progress in petal edge design indicates that it can be suppressed outside the IWA to negligible levels. We therefore do not include a treatment of solar glint in this study; for a more detailed treatment see [26].
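The quoted correspondence between IWA and physical separation is just the small-angle relation (arcsec × pc = AU); a one-line sketch, for illustration only:

```python
def projected_separation_au(angle_mas, distance_pc):
    """Projected separation in AU subtended by an angle in mas at a star
    distance in pc (small-angle approximation: arcsec x pc = AU)."""
    return (angle_mas / 1000.0) * distance_pc

# 100 mas IWA at 10 pc -> 1 AU; the 4.5'' field of view at 10 pc -> 45 AU
```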
The starshade speckle pattern, and the ability to remove it during analysis, is not included in this study, although it is expected to become an important source of background. These effects will be accounted for in more detail with more recent tools such as the SISTER simulation package and upcoming Starshade data challenges. The starshade can accommodate a relatively wide bandpass (26%), compatible with the bandpass of the CGI (Table 1). The relation between starshade design and bandpass is discussed in [23]. One important observing constraint is that, with a starshade, the telescope pointing is limited to at most 83∘ from the Sun. Beyond this solar exclusion angle, the starshade reflects a significant amount of sunlight into the telescope.

### 4.3 Signal-to-Noise Ratio Model

The signal to noise ratio $SNR$ is approximated as a function of the number of photon counts from the planet of interest $n_{\rm pl}$ and the total contribution of background photon counts $n_{\rm bkg}$ according to

$SNR=\frac{n_{\rm pl}}{\sqrt{n_{\rm pl}+n_{\rm bkg}}}.$ (7)

The conversion of the planet flux $F_{\rm pl}$ to photon counts $n_{\rm pl}$ is given by the observatory’s single photon flux sensitivity $F_{\rm sens}$ (Equation 6) according to $n_{\rm pl}=F_{\rm pl}/F_{\rm sens}$. The background photon counts, $n_{\rm bkg}$, have two main contributions, the external background fluxes (leaked starlight, zodiacal and exozodiacal light) and the detector noise counts $N_{\rm det}$. The external background fluxes are converted to background photon counts via the observatory’s single photon flux sensitivity to give

$n_{\rm bkg}=\frac{F_{ez}+F_{z}+F_{SS}}{F_{\rm sens}}+N_{\rm det}.$ (8)

The modifications to the SNR estimate for spectroscopy are as follows. The bandwidth now corresponds to each sub-band of the spectrometer, which is given by $\Delta\lambda=\lambda/R$.
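A minimal sketch of the imaging-mode SNR estimate (Equations 7 and 8; the function names are ours):

```python
import math

def background_counts(F_ez, F_z, F_SS, F_sens, N_det):
    """Background photon counts of Eq. (8): exozodi, zodi, and leaked
    starlight fluxes converted to counts, plus detector noise counts."""
    return (F_ez + F_z + F_SS) / F_sens + N_det

def snr_imaging(n_pl, n_bkg):
    """Photon-counting signal-to-noise ratio of Eq. (7)."""
    return n_pl / math.sqrt(n_pl + n_bkg)
```

With $n_{\rm pl}=F_{\rm pl}/F_{\rm sens}$ this reproduces the imaging estimate; the spectroscopy case rescales the astrophysical background term as described next.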
With a characteristic value of $R\sim 50$, the bandwidth at the central wavelength of 750 nm is 15 nm instead of 185 nm. This degrades the single photon flux sensitivity $F_{sens}$ by roughly an order of magnitude compared to imaging mode. The leaked starlight as well as the zodiacal and exozodiacal emission can be dispersed into the planet spectrum. If $\theta_{\mathrm{slit}}$ is the angular width of the slit, the effective increase in the background photon counts $n_{bkg}$ is a factor of $\theta_{\mathrm{slit}}/\theta_{PSF}$:

$n_{bkg}=\frac{\theta_{slit}}{\theta_{PSF}}\frac{F_{ez}+F_{z}+F_{SS}}{F_{sens}}+N_{det}$ (9)

under the assumption that $\theta_{\mathrm{slit}}\geq\theta_{PSF}$. A slit width of 120 mas is assumed for the starshade slit-prism spectrometer. The slit has to be wider than the 65 mas PSF core to accommodate the a priori unknown motion of an Earth-like exoplanet over the nominal spectral integration period of 25 days, given a data lag of several days. We are assuming that during a spectroscopic observation the slit position can be adjusted over a period of several days given data telemetry, analysis, and commanding latencies. The estimates made here assume that the leaked starlight, stray light from the starshade, and exozodi light have been approximated using smooth distributions. The treatment of deviations from these assumptions requires more sophisticated imaging simulation tools (such as SISTER) and the exploration of a wider range of second-order corrections; these will be treated in future mission concept studies.

## 5 Target Sensitivity

The analysis presented in this section estimates the performance of the observatory on a per-target basis and has not assumed any constraints on retargeting time or total mission duration, which are covered in the next section.
This serves as an initial bound on how many targets the observatory is sensitive to and sets a clear goal for the more complicated problem of visit strategy and retargeting maneuvers. We divide the discussion among three types of targets, corresponding to the three objectives described in §2: 1) potential Earth-like planets orbiting bright nearby stars (§5.1), 2) known exoplanets from radial velocity measurements at wide angular separation from their host star (§5.2), and 3) dust disks that may surround any of the target stars (§5.3).

### 5.1 Sensitivity to Earth-like Planets

#### 5.1.1 Parameters

The parameters that define the star’s flux density (Equation 2) and mass are taken from ExoCat [27]. Table 5 shows the star parameters for targets selected by the procedure defined later in this section. Terrestrial planets in our model (Equation 4) are assumed to have an Earth-like geometric albedo of $A_{G}=0.2$, based on distant observations of Earth with the EPOXI spacecraft [16], along with the Lambertian (isotropic) scattering phase function in Equation 3. The range of planet radii considered is bounded above at $r_{\rm pl}\leq 1.4$ $r_{\oplus}$, based on evidence that suggests that planets with radii below this limit are predominantly rocky [28]. The lower bound on terrestrial planet radii depends on the planet’s ability to retain an appreciable atmosphere, which, in turn, depends on the stellar illumination. This results in a dependence on the planet’s semi-major axis $R_{\rm pl}$, modified by the stellar luminosity to give $r_{\rm pl}/r_{\oplus}\geq 0.8(R_{\rm pl}/{\rm AU})^{1/2}(L_{\star}/L_{\odot})^{-1/4}$ [29]. These maximum and minimum radii serve as the defining limits for Earth-like planets in the simulation results presented here. The adopted ranges of planetary parameters for habitable zone planets (defined here) and gas giant planets (§5.3 below) are summarized in Table 4.
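The adopted limits can be encoded directly (a sketch under the assumptions above; the habitable-zone scaling uses the conservative range adopted in the text):

```python
def earthlike_radius_bounds(R_pl_au, L_star=1.0):
    """(r_min, r_max) in Earth radii for 'Earth-like' planets:
    r_max = 1.4 r_Earth; r_min = 0.8 (R_pl/AU)^(1/2) (L/L_sun)^(-1/4)."""
    r_min = 0.8 * R_pl_au ** 0.5 * L_star ** -0.25
    return r_min, 1.4

def habitable_zone_au(L_star=1.0):
    """Conservative habitable zone [0.95, 1.67] AU, scaled by sqrt(L/L_sun)
    to preserve the Solar System insolation range."""
    scale = L_star ** 0.5
    return 0.95 * scale, 1.67 * scale
```

At 1 AU around a solar twin these bounds reduce to the quoted 0.8-1.4 $r_{\oplus}$ range.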
The orbital location of Earth-like planets is defined as the habitable zone (HZ) – the region around a star where a rocky planet with a thin atmosphere may have liquid water on its surface. The location of the habitable zone depends both on the stellar luminosity and on assumptions for the planet’s cloud properties. We adopt a conservative estimate for the habitable zone orbital radii $R_{pl}$ from 0.95 to 1.67 AU for a Solar-luminosity star [30, 29]. These orbital radii scale with the square root of the stellar luminosity, to keep the same insolation range as the Solar System.

Table 4: Assumed Planet and Dust Properties

| Parameter | Value (or [Range]) |
|---|---|
| Earth-like planet geometric albedoᵃ ($A_{G}$) | 0.2 |
| Earth-like planet radius ($r_{p}$) | $[0.8(R_{\rm pl}/{\rm AU})^{1/2}(L_{\star}/L_{\odot})^{-1/4},\ 1.4]$ $r_{\oplus}$ |
| Habitable zone ($R_{\rm pl}$) | $[0.95,1.67](L_{\star}/L_{\odot})^{1/2}$ AU |
| Gas-giant planet geometric albedo ($A_{G}$) | 0.3 |
| Gas-giant planet radiusᵇ ($r_{p}$) | Ref. [31] ($r_{\rm Jup}$ max) |
| Zodiacal dust brightness ($dF_{z}/d\Omega$) | Ref. [17] |
| Exozodi dust brightnessᶜ ($dF_{ez}/d\Omega$) | 4.5 zodi |

ᵃ For the assumed isotropic scattering, this geometric albedo is equivalent to a 0.3 spherical albedo.
ᵇ While the radius depends on the mass of the gas giant planet, we set a conservative upper limit of $r_{\rm Jup}$.
ᶜ The unit of 1 zodi is equivalent to 22 mag/arcsec².

Planet sizes and semi-major axes are drawn randomly from these defined ranges for Earth-like planets, based on the distribution defined by SAG-13 [32] and modified by HabEx to include the dependence of the orbital semi-major axis on the lower limit of planet radii.
This is determined by drawing from the distribution defined by

$\frac{\partial^{2}N(r_{\rm pl},P)}{\partial\ln r_{\rm pl}\ \partial\ln P}=0.38\,r_{\rm pl}^{-0.19}\,P^{0.26}$ (10)

where the orbital period $P$ defines the orbital radius $R_{\rm pl}$ by way of the stellar mass $M_{\star}$ using Kepler’s third law. For Earth-like exoplanets, the orbits are assumed to be circular, consistent with most previous studies, e.g. [5]. The estimates made on target sensitivity take into account target availability windows (§5) determined by solar exclusion angles, along with the Keplerian motion of the planet and the associated changes in planet brightness. For the exozodiacal dust environment, we assume a constant fiducial value of 4.5 zodi, based on the median dust thickness factor derived from LBTI limits and measurements [10]. Two targets of interest, epsilon Eridani and Vega, have measured warm dust disk brightnesses of 300 zodi and 33 zodi, respectively. These values are well in excess of 10 zodi, which significantly increases the integration time for detection of Earth-like planets and the risk of contamination from planet-induced disk structure [33, 34]. As such, these targets have been removed from the list that was presented in the Starshade Rendezvous Probe study report [1].

#### 5.1.2 Treatment of Binaries

Nearby optical companions to potential target stars can create light leakage comparable to the starshade instrument contrast, depending on their relative brightness and separation. Diffracted flux at an angle $\Theta$ away from a companion star can be approximated as $F/F_{0}\simeq 4/(\pi x^{3})$, where $x\equiv\pi D\Theta/\lambda$ and $D$ is the telescope diameter. (This formula is just the large-angle approximation for an Airy diffraction pattern; the error in this approximation is $<$1% beyond the third Airy ring.) Figure 3 shows the angular separation and difference in magnitude for nearby binary stars (those with $V<$ 5 mag and distance $<$ 8.5 pc).
Diffraction from the secondary star creates additional background flux near the primary (shown for $10^{-11}$, $10^{-10}$, and $10^{-9}$ contrast levels). Stars with excessive levels of background (relative to the starshade contrast floor of $4\times 10^{-11}$) are dropped from the target list (open circles in Figure 3). We note that mu Hercules was in the target list used for the Starshade Rendezvous Probe study report [1] but is removed from this updated list due to contamination from its nearby optical companion. Some wide binaries still remain as viable targets (e.g. Procyon, with a 10-magnitude-fainter white dwarf companion at 4.3′′ separation), shown as filled circles. While these companions are typically at hundreds of AU separation, Procyon B orbits at only 15 AU (with periapse of 9 AU); this relatively close orbit could impact the formation and evolution of habitable zone planets. Note that the background contamination considered here is idealized as solely due to diffraction. Optical aberrations will contribute additional scattering. For the Roman telescope, the authors of [35] estimate that these aberrations could increase the effective contrast limit by $\sim$1–2 orders of magnitude, such that borderline systems (Procyon and Sirius) would have their imaging performance significantly degraded. Very wide or faint binaries (Fomalhaut, eps Ind, bet CVn) still have an insignificant contribution, even with the telescope aberrations included.

Figure 3: Angular separation and difference in brightness are shown for all binaries that are potential targets. Diffraction from the secondary star produces a background contrast level of $4\times 10^{-11}$ (i.e. comparable to our instrumental performance) along the dashed line. Binaries with high levels of binary contamination (i.e. those to the left of the line) are shown as open circles, while those that are still viable targets are shown as filled circles.
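The binary screening above can be sketched as follows (our illustration; the check uses the 10-magnitude, 4.3′′ Procyon companion values from the text, and a hypothetical closer, brighter companion for contrast):

```python
import math

def companion_contrast(delta_mag, sep_arcsec, D=2.4, lam=750e-9):
    """Diffracted light from a companion at the primary's position, relative
    to the primary: 10^(-dm/2.5) * 4/(pi x^3) with x = pi D Theta / lambda
    (large-angle Airy approximation)."""
    theta = sep_arcsec * math.pi / (180.0 * 3600.0)  # arcsec -> radians
    x = math.pi * D * theta / lam
    return 10.0 ** (-delta_mag / 2.5) * 4.0 / (math.pi * x ** 3)

# A Procyon-like companion (10 mag fainter, 4.3'' away) stays below the
# 4e-11 starshade contrast floor, so the system survives the cut; a
# 3-mag-fainter companion at 1'' would not.
```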
#### 5.1.3 Background Contributions Each target has several sources of background noise that limit the sensitivity to Earth-like exoplanets. It is illustrative to show the relative contributions from each source, as this determines which targets are viable for habitable exoplanet observations. In Figure 4 we show the contributions of photon counts, assuming 1 day of integration time in imaging mode, under the following assumptions. We assume an Earth-like exoplanet at EEID in quadrature phase. The exozodiacal dust disk brightness has a fiducial value of 4.5 zodi. The leaked starlight assumes an instrument contrast $C_{SS}=4\times 10^{-11}$ everywhere, which is conservative since the leaked starlight generally decreases away from the IWA. The Solar System zodi background is shown as bars indicating the range of values it can take depending on when the observation is made. Finally, the detector noise contribution is shown as a dashed line. The targets are ordered by the brightness of the Earth-like exoplanet. For most targets, the exozodiacal dust disk brightness dominates the background, followed by the Solar System’s zodiacal dust disk brightness. The leaked starlight at the inner working angle can be stronger in cases where the star is very bright. Note, however, that for these stars the habitable zone will be pushed out to radii typically much larger than the IWA, where the leaked starlight drops. The detector noise counts lie below the contribution of the Solar System’s zodiacal dust. The target selection is based on search completeness, discussed in more detail in the next subsection, which depends on the target availability windows, the field of view available around the star, and the range of parameters sampled for terrestrial exoplanets. While the photon counts estimated in Figure 4 do not capture all these details, they do provide a sense of which targets will provide the highest sensitivity to Earth-like exoplanets.
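The photon budget above translates into a detection SNR in the usual shot-noise-limited way: the planet signal grows linearly with integration time while the noise from planet plus backgrounds grows as the square root of the accumulated counts. The sketch below uses purely illustrative count rates (not mission values) to show the structure of the calculation:

```python
import math

def imaging_snr(planet_rate, background_rates, t_days):
    """Shot-noise-limited imaging SNR: signal = planet counts,
    noise = sqrt(planet + summed background counts)."""
    t_s = t_days * 86400.0
    signal = planet_rate * t_s
    noise = math.sqrt((planet_rate + sum(background_rates)) * t_s)
    return signal / noise

# Illustrative count rates in photons/s (NOT mission values): planet,
# then exozodi, Solar System zodi, leaked starlight, detector noise.
snr = imaging_snr(0.01, [0.02, 0.005, 0.002, 0.001], t_days=1.0)
```

In this toy example a 1-day integration clears the SNR$>$7 detection threshold used in §5.2.1; doubling the exozodi rate alone would noticeably lengthen the required integration, which is why the exozodi level dominates target viability.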
Figure 4: Photon counts due to each contribution in the SNR model for the 16 nearby stars that provide the best sensitivity to Earth-like exoplanets in their habitable zones. While the Earth-like planet flux densities can vary widely depending on radius, distance to the star, and illumination phase angle, we provide estimates for a planet with Earth parameters at EEID in quadrature phase. In most cases, the exozodiacal dust disk brightness, with a fiducial brightness of 4.5 zodi, dominates the background photon contribution. The leaked starlight flux, assuming an instrument contrast $C_{SS}=4\times 10^{-11}$, is in most cases comparable to the Solar System zodiacal (SS Zodi) dust brightness. The vertical bars of SS Zodi represent the variation depending on the time of year the target is observed. The detector noise is not a major contributor to the error budget. ### 5.2 Observing Windows The target availability windows of the Starshade/Roman system are an important constraint on the observatory’s ability to spectrally characterize and determine the orbits of Earth-like exoplanets. The Starshade/Roman system can only observe stars between 54∘ and 83∘ from the Sun. For a star in the ecliptic plane, this limited visibility results in two observing windows per year, each $\sim$30 days long. With increasing ecliptic latitude the windows become significantly longer, until they merge at $54^{\circ}$ to produce a single yearly window lasting several months, then shrinking until it vanishes above $83^{\circ}$. Stars very close to an ecliptic pole (e.g. chi Dra at 83.6∘ ecliptic latitude) are never observable. The sky positions and observing windows for all of our habitability and biosignature targets (the nearby-star planet search) are summarized in Figure 5.
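The window geometry described above follows from spherical trigonometry alone. A minimal sketch, assuming a circular Earth orbit and treating the exclusion angles as hard limits, reproduces the qualitative behavior (two $\sim$30-day windows in the ecliptic plane, a single long window at high latitude):

```python
import math

def observing_windows(beta_deg, inner=54.0, outer=83.0, steps=3650):
    """Days per year a star at ecliptic latitude beta is observable.
    As the Sun circles the ecliptic, the Sun-star separation obeys
    cos(sep) = cos(beta) * cos(dlon); observable when inner <= sep <= outer."""
    cos_beta = math.cos(math.radians(beta_deg))
    observable = []
    for i in range(steps):
        dlon = 2.0 * math.pi * i / steps      # Sun-star ecliptic longitude gap
        sep = math.degrees(math.acos(cos_beta * math.cos(dlon)))
        observable.append(inner <= sep <= outer)
    days = sum(observable) * 365.0 / steps
    # count contiguous windows (wrapping around the year)
    windows = sum(1 for i in range(steps)
                  if observable[i] and not observable[i - 1])
    return days, windows

days0, win0 = observing_windows(0.0)     # ecliptic-plane star
days70, win70 = observing_windows(70.0)  # high-latitude star
```

For $\beta=0^{\circ}$ this gives two windows of roughly 29 days each; for $\beta=70^{\circ}$, one window of over four months, matching the trend stated in the text.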
The sky positions and observing windows for all of our gas-giant atmospheric metallicity targets (treated in §5.3) are shown in Figure 6. Figure 5: Top: Sky positions of the Habitability and Biosignature Gases targets in ecliptic coordinates. An instantaneous observing region due to solar exclusion angles (54∘ and 83∘, respectively) is shown as a light red shaded region centered on $0^{\circ}$ ecliptic longitude. Bottom: Target star observing windows as constrained by telescope and starshade solar exclusion angles. These windows result from the instantaneous observing region in the panel above shifting in ecliptic longitude with a yearly period. Each star typically has two $\sim$30-day-long observing windows per year, while higher-latitude stars have a single observing window per year that is longer in duration. For the sample of nearby stars to be searched for Earth-like planets (upper panel), the black dots correspond to the desired observation start times, allowing sufficient time for a spectral characterization. Figure 6: Top: Sky positions of known exoplanets for the gas-giant atmospheric metallicity investigation in ecliptic coordinates (§5.3). An instantaneous observing region due to solar exclusion angles (54∘ and 83∘, respectively) is shown as a light blue shaded region centered on $0^{\circ}$ ecliptic longitude. Bottom: Target star observing windows as constrained by telescope and starshade solar exclusion angles. The limited observing windows for each target provide a primary constraint on our observing strategy. During the 2-year lifetime of the mission, there will generally be 4 opportunities to observe each target. While spectral characterization can be performed with only a single visit at a favorable illumination phase, multiple epochs are needed to constrain the planet’s orbit, in particular its semi-major axis.
The semi-major axis indicates the average amount of stellar radiation received from the parent star and thereby determines whether the planet is in the habitable zone. Determining the semi-major axis with sufficient accuracy requires at least three astrometric measurements of the planet’s position spread out over two years. An example of an orbit reconstruction simulation is shown in Figure 7. Earth-like exoplanets in the HZ are generated with randomly sampled Keplerian orbital parameters, with the planet’s phase-varying brightness setting the associated astrometric precision, defined as the telescope resolution divided by the imaging SNR. The simulated observations are then reconstructed with a Markov chain Monte Carlo (MCMC) method that forward models the simulated data. An ensemble of these simulations for each of our target stars demonstrates that Earth-like planets can typically be constrained to the habitable zone with $>$80% confidence [36]. Figure 7: Four visits to tau Ceti, as planned with the Starshade Rendezvous Probe (Figure 5), with three detections, are sufficient to constrain orbits to the habitable zone (shown in dashed green lines). The left panel shows a simulation of an Earth-like planet with a circular orbit. True positions are marked with blue circles, and astrometric estimates with error bars are shown in red. The numbers indicate the visit number for each observation. The inner working angle (IWA) is shown in gray. On the right, sample orbits (gray lines) from an MCMC-reconstructed posterior distribution demonstrate that the fit is well within the HZ, including uncertainties in the orbit eccentricity. The true orbit is shown in orange. #### 5.2.1 Search Completeness and Target Selection Our primary metric for evaluating the observatory performance for each star is target completeness – the fraction of habitable zone planets that can be completely characterized.
In order to determine whether a planet is habitable, we need to be able to detect it, constrain its orbit to know that it is indeed in the habitable zone, and take a spectral measurement to determine whether it has an atmosphere with biosignature gases. We therefore estimate the following completeness values: * • 1) single-visit completeness, the fraction of habitable zone planets that can be effectively imaged at any one time (defined as SNR$>$7 within a 1-day integration; note that the SNR threshold was changed from 5 in the Probe Study Report [1] to 7 in this study to reduce the probability of false positives, a value consistent with the detection threshold used in the HabEx study report), * • 2) orbit determination completeness, the fraction of observed planets whose orbits are in the habitable zone (assuming 4 observing epochs), * • 3) spectral characterization completeness, the fraction of imaged planets whose spectra can identify key atmospheric constituents (SNR$>$20 within a 25-day integration), and * • 4) target completeness, the fraction of observed planets that meet conditions 2 and 3 above. The single-visit completeness serves as a first cut to identify targets where Earth-like exoplanets have a high probability of being detected [37]. It is important to note that if a planet is not detected in a single visit, it does not mean it is absent, since single-visit completeness with a starshade and CGI is $\lesssim 0.70$ (Table 5). It is equally important to note that a single detection of a planet in a region consistent with the habitable zone is not enough to conclude that it is indeed a habitable zone exoplanet. Follow-up observations that constrain the planet’s orbit are necessary to determine that. The orbit determination completeness is the probability that the planet’s orbit can be constrained to be in the habitable zone.
In a separate study [36], it was determined that 3 detections in 4 visits to the target were sufficient to constrain the orbit of a habitable zone exoplanet with $>$80% confidence, depending on the orbital inclination and the phase of observation. The orbit determination completeness, in this study, is the probability that at least 3 detections occur within 4 visits. The simulations in that study sample the orbit periods and observation windows assumed here and perform a Markov chain Monte Carlo fit of the observations to estimate the posterior distribution of the planet’s semi-major axis. The number of visits is limited by the lengths and periodicity of the target availability windows for most stars of interest (see Figure 5). The spectral characterization completeness is the probability that a spectroscopic observation of a target is successful in any one of 4 visits. The criterion for success (SNR$>$20) is based on a study by [38], which found this to be the minimum needed for detection of molecular oxygen and water vapor lines in the CGI band. The 25-day integration time window is the typical maximum for most targets, although some are available for significantly longer. The target completeness requires that all criteria above are met; it is the probability that both the orbit constraint requirements and a spectral observation are achieved for a nearby star. For each system, we calculate the completeness with a Monte Carlo sampling of habitable zone orbits. We sample random semi-major axes (using Equation 10), orbital inclinations (cosine distributed), and true anomalies (uniformly distributed for a circular orbit). Circular orbits are assumed. Each randomly selected planet is propagated along its Keplerian orbit, with the time of observation limited to 4 observing windows spaced over 2 years (see Figure 5 in §5.2).
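A stripped-down version of this Monte Carlo illustrates the geometric part of the single-visit calculation. The sketch keeps only the inner-working-angle cut (the full calculation also applies the SNR$>$7 brightness cut), substitutes a log-uniform semi-major-axis draw for Equation 10, and uses placeholder values for the IWA and HZ limits:

```python
import math, random

def single_visit_fraction(dist_pc, hz_in_au, hz_out_au, iwa_arcsec=0.1,
                          n=20000, seed=1):
    """Fraction of randomly oriented, circular habitable-zone orbits whose
    projected separation exceeds the inner working angle at a random epoch."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        # semi-major axis: log-uniform across the HZ (stand-in for Eq. 10)
        a_au = math.exp(rng.uniform(math.log(hz_in_au), math.log(hz_out_au)))
        cos_i = rng.uniform(-1.0, 1.0)         # isotropic orbit orientations
        nu = rng.uniform(0.0, 2.0 * math.pi)   # true anomaly (circular orbit)
        # projected separation of an inclined circular orbit
        r_proj = a_au * math.sqrt(math.cos(nu) ** 2 +
                                  (math.sin(nu) * cos_i) ** 2)
        # 1 AU at 1 pc subtends 1'', so theta[''] = r_proj[AU] / d[pc]
        if r_proj / dist_pc > iwa_arcsec:
            hits += 1
    return hits / n

# tau-Ceti-like case: d = 3.7 pc, HZ roughly 0.55-1.0 AU (assumed values)
frac = single_visit_fraction(3.7, 0.55, 1.0)
```

With the brightness cut added, fractions like this are driven down toward the $\sim$0.4–0.7 single-visit completeness values reported in Table 5.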
The 16 stars targeted for Earth-like planets are listed in Table 5 and summarized in Figure 8, which shows simulation results for the single-visit, orbit determination, spectral characterization, and overall target completeness for each target. Table 5: Nearby stars targeted for Earth-like planets Star | Distance | $V$ | $L_{\star}$ | $T_{\rm eff}$ | $M_{\star}$ | Spectral | Completeness ---|---|---|---|---|---|---|--- Name | (pc) | (mag) | ($L_{\odot}$) | (K) | ($M_{\odot}$) | Type | Single-visit | Orbit | Spectral | Overall tau Ceti$^{b,c}$ | 3.7 | 3.5 | 0.52 | 5283 | 0.80 | G8.5V | 0.67 | 0.55 | 0.79 | 0.48 Procyon$^{a}$ | 3.5 | 0.4 | 7.1 | 6543 | 1.49 | F5IV-V | 0.65 | 0.54 | 0.55 | 0.43 eps Ind$^{a,c}$ | 3.6 | 4.7 | 0.23 | 4683 | 0.68 | K4V | 0.67 | 0.52 | 0.74 | 0.42 Sirius$^{a}$ | 2.6 | $-$1.4 | 30.5 | 9580 | 2.40 | A1.0V | 0.58 | 0.52 | 0.25 | 0.25 omi 2 Eri$^{c}$ | 5.0 | 4.4 | 0.42 | 5151 | 0.81 | K0.5V | 0.65 | 0.51 | 0.21 | 0.13 Altair | 5.1 | 0.8 | 10.7 | 7800 | 1.83 | A7IV-V | 0.58 | 0.52 | 0.10 | 0.09 del Pav | 6.1 | 3.5 | 1.3 | 5590 | 0.99 | G8.0IV | 0.64 | 0.55 | 0.07 | 0.05 82 Eri$^{c}$ | 6.0 | 4.3 | 0.69 | 5401 | 0.85 | G8.0V | 0.60 | 0.38 | 0.02 | 0.01 sig Dra | 5.8 | 4.7 | 0.44 | 5246 | 0.80 | G9.0V | 0.55 | 0.39 | 0.00 | 0.00 bet Hyi | 7.5 | 2.8 | 3.7 | 5873 | 1.14 | G1IV | 0.58 | 0.51 | 0.00 | 0.00 bet CVn$^{a}$ | 8.4 | 4.2 | 1.3 | 5930 | 1.03 | G0V | 0.43 | 0.13 | 0.00 | 0.00 1 Ori | 8.1 | 3.2 | 3.0 | 6424 | 1.24 | F6V | 0.50 | 0.30 | 0.00 | 0.00 Fomalhaut$^{a,b}$ | 7.7 | 1.2 | 16.5 | 8399 | 2.05 | A3V | 0.46 | 0.43 | 0.00 | 0.00 del Eri | 9.0 | 3.5 | 3.4 | 5095 | 1.19 | K0IV | 0.46 | 0.25 | 0.00 | 0.00 gam Lep | 8.9 | 3.6 | 2.5 | 6372 | 1.27 | F7V | 0.44 | 0.21 | 0.00 | 0.00 zet Tuc | 8.6 | 4.2 | 1.3 | 5948 | 1.01 | G0V | 0.42 | 0.14 | 0.00 | 0.00 $^{a}$ Binary (see Figure 3). $^{b}$ Known debris disk. $^{c}$ Known to have planet(s). Figure 8: Completeness estimates are shown for the detection of Earth-like planets around nearby
stars. The simulation results are sorted from best target completeness (top) to least (bottom). The overall target completeness is shown in black, while its composite factors are the single-visit completeness (orange), orbit determination completeness (blue), and spectral characterization completeness (green). Most of the habitable zone is visible for all the targets, with single-visit completeness ranging from $\sim$0.5 to $\sim$0.7 (orange bars in Figure 8). Most of the planets that are visible in their habitable zone can also have their orbits traced over multiple epochs, resulting in orbit determination completenesses of $\gtrsim$0.5 for most stars (blue in Figure 8). The ability of the observations to constrain each planet’s orbit will be described in a companion paper [36]. The predominant limiting factor is the spectral characterization completeness (green in Figure 8), which varies by orders of magnitude between stars. Only the brightest stars provide enough photons to produce a high-quality reflected light spectrum within the integration time limits. For the best targets, spectral characterization completeness can be as high as $\sim$0.8, but as the expected planetary reflected light flux density decreases, 25 days is not sufficient to achieve a spectral SNR$>$20. Although these fainter planets will not meet our primary science objective, the probability of detection and orbit constraint is still very high, and lower-SNR spectra will still be sensitive to some atmosphere types. This will be the subject of a future investigation. Of the total of 8 stars that have non-zero target completeness, tau Ceti has the largest ($\gtrsim 0.5$). The remaining targets still have a significant completeness for detection and orbit determination of Earth-like exoplanets.
These stars are of interest for reconnaissance of planets in orbit and for observing their exozodiacal dust disk brightness in preparation for more sensitive observatories in the future. Table 6: Known super-Earth exoplanet target list Planet | distance | $V$ | $L_{\star}$ | $T_{\rm eff}$ | $M_{\star}$ | Spectral | $M_{p}$ | $a_{p}$ | Ang. sep. | $F_{p}/F_{\star}$$^{a}$ | Integration time (days)$^{b}$ ---|---|---|---|---|---|---|---|---|---|---|--- Name | (pc) | (mag) | ($L_{\odot}$) | (K) | ($M_{\odot}$) | Type | ($M_{\oplus}$) | (AU) | (mas) | ($\times 10^{-9}$) | $\beta$=45∘ | $\beta$=90∘ | $\beta$=135∘ tau Ceti e | 3.7 | 3.49 | 0.5 | 5283 | 0.8 | G8.5V | 3.9 | 0.54 | 147 | 1.86 | 0.14 | 0.44 | 9.3 tau Ceti f | 3.7 | 3.49 | 0.5 | 5283 | 0.8 | G8.5V | 3.9 | 1.33 | 365 | 0.30 | 1.9 | 8.4 | - $^{a}$ Planet-star flux ratio for a half-illuminated planet ($\beta$=90∘). $^{b}$ Integration times to reach SNR=20 for an R=50 spectrum, over a range of illumination phase angles $\beta$. Note. — Dashes indicate integration times in excess of 25 days. Note that tau Ceti has two already-discovered super-Earth planets that are widely separated and bright enough to have their atmospheres characterized [39]. Integration times to obtain spectra are listed in Table 6 as a function of each planet’s illumination phase. Other than the observing window limitations on integration time, the analysis presented here is on a per-target basis and has not assumed any constraints on retargeting time or total mission duration. Nevertheless, there is an effective upper bound on the number of targets set by our requirements (orbit determination and spectral characterization). While the number of target stars could be increased with a greater allocation of telescope time, the integration time needed to achieve a successful spectral measurement is limited by the solar exclusion angles. The Roman Space Telescope CGI with starshade is therefore more limited by sensitivity than it is by telescope time allocation.
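The planet-star flux ratios tabulated here and in Table 7 can be roughed out from the standard reflected-light relation $F_{p}/F_{\star}=A_{g}\,(R_{p}/a)^{2}\,\Phi(\beta)$. The sketch below uses a Lambert-sphere phase function $\Phi$ as an assumption (the tabulated values come from the study's own photometric model, so exact agreement is not expected):

```python
import math

AU_KM = 1.496e8
R_JUP_KM = 71492.0

def flux_ratio(geometric_albedo, radius_km, a_au, beta_deg):
    """Reflected-light planet/star flux ratio A_g * (R_p/a)^2 * Phi(beta),
    with a Lambert-sphere phase function (assumed, not the study's model)."""
    beta = math.radians(beta_deg)
    phi = (math.sin(beta) + (math.pi - beta) * math.cos(beta)) / math.pi
    return geometric_albedo * (radius_km / (a_au * AU_KM)) ** 2 * phi

# Half-illuminated (beta = 90 deg) Jupiter-radius planet at 1.64 AU, A_g = 0.3
c = flux_ratio(0.3, R_JUP_KM, 1.64, 90.0)
```

This yields a contrast of order $10^{-8}$, in line with the brightest gas-giant entries; the strong phase dependence of $\Phi$ is why the $\beta=135^{\circ}$ integration times in the tables grow so quickly.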
### 5.3 Sensitivity to Gas-Giant Planets Next we consider known gas-giant planets as targets for atmospheric characterization. The goal here is to determine whether there is a correlation between atmospheric metallicity and fundamental planetary properties such as mass and semi-major axis (Figure 9). A strong correlation is found in Solar System gas giants, and there have been indications of such a correlation in exoplanet data, although the uncertainties in the atmospheric metallicity of exoplanets are still high (Figure 9). Measurements of the methane absorption line serve as our primary proxy for the atmospheric metallicity, allowing for direct comparison with the Solar System’s outer planets [40]. Our quantitative objective is to achieve a measurement of the correlation between planet mass and atmospheric metallicity with at least 3-$\sigma$ significance. Figure 9: The correlation of atmospheric metallicity and planet mass. The data, with uncertainties, are shown for the Solar System (black bars) and exoplanet transit spectroscopy measurements [11] (red bars). The green points show a random sample of 10 known gas-giant exoplanets that could be observed with the Starshade Rendezvous Probe, assuming they follow the same correlation with 30% fractional uncertainty in atmospheric metallicity. To determine if such a correlation is present in a population of gas-giant exoplanets orbiting different stars, we need to establish how many are needed and with what level of uncertainty in metallicity. There are currently 20 gas-giant exoplanets with orbital angular separations accessible to the Starshade Rendezvous Probe (from 0.13′′ to 3.2′′; Table 7). Since it is unrealistic to expect that all of them will be at an orbital phase favorable for spectral measurements, we looked at randomly sampled subsets that might be available.
We find that if a subset of 10 stars is available with a 30% metallicity fractional uncertainty, then it is possible to discern a mass-metallicity correlation with 3-$\sigma$ statistical significance (Figure 10). The 30% metallicity fractional uncertainty can be achieved with spectral SNR$>$15 measurements in one or two bands from $\sim$600 to $\sim$800 nm [41, 42]. Table 7: Known gas-giant exoplanet target list Planet | distance | $V$ | $L_{\star}$ | $T_{\rm eff}$ | $M_{\star}$ | Spectral | $M_{p}$ | $a_{p}$ | Ang. sep. | $F_{p}/F_{\star}$$^{a}$ | Integration time (days)$^{b}$ ---|---|---|---|---|---|---|---|---|---|---|--- Name | (pc) | (mag) | ($L_{\odot}$) | (K) | ($M_{\odot}$) | Type | ($M_{\rm Jup}$) | (AU) | (mas) | ($\times 10^{-9}$) | $\beta$=45∘ | 90∘ | 135∘ bet Gem b | 10.4 | 1.16 | 40.9 | 4850 | 2.6 | K0IIIvar | 2.30 | 1.64 | 158 | 10.9 | $\leq 0.1$ | $\leq 0.1$ | $\leq 0.1$ gam Cep b$^{c}$ | 14.1 | 3.21 | 11.8 | 4761 | 1.9 | K1IV | 1.85 | 2.05 | 145 | 7.13 | $\leq 0.1$ | $\leq 0.1$ | 0.5 ups And d | 13.5 | 4.09 | 3.6 | 6213 | 1.3 | F8V | 4.13 | 2.51 | 186 | 4.43 | $\leq 0.1$ | 0.3 | 7 eps Eri b | 3.2 | 3.71 | 0.4 | 5146 | 0.9 | K2.0V | 1.55 | 3.39 | 1055 | 2.65 | $\leq 0.1$ | 0.3 | 7 47 UMa b | 14.1 | 5.03 | 1.7 | 5882 | 1.1 | G0V | 2.53 | 2.10 | 149 | 6.57 | $\leq 0.1$ | 0.6 | 16 47 UMa c | 14.1 | 5.03 | 1.7 | 5882 | 1.1 | G0V | 0.54 | 3.60 | 255 | 2.59 | 0.8 | 3.5 | - HD 192310 c$^{c}$ | 8.9 | 5.72 | 0.4 | 5080 | 0.8 | K3V | 0.08 | 1.18 | 132 | 3.29 | 0.6 | 2 | - HD 219134 h$^{c}$ | 6.5 | 5.57 | 0.3 | 4835 | 0.8 | K3.0V | 0.34 | 3.11 | 475 | 2.79 | 0.7 | 3 | - HD 39091 b | 18.3 | 5.65 | 1.6 | 5950 | 1.1 | G1V | 10.02 | 3.10 | 169 | 2.67 | 1 | 6 | - HD 114613 b | 20.7 | 4.84 | 4.5 | 5782 | 1.2 | G3V | 0.36 | 5.34 | 258 | 1.00 | 2 | 10 | - HD 190360 b | 15.9 | 5.73 | 1.2 | 5552 | 1.0 | G6IV | 1.54 | 3.97 | 250 | 1.93 | 3 | 15 | - HD 160691 c | 15.5 | 5.12 | 2.0 | 5784 | 1.1 | G3IV/V | 1.81 | 5.24 | 337 | 1.09 | 3 | 15 | - 14 Her b | 17.6 | 6.61 | 0.7 | 5388 | 1.1 | K0V | 4.66 | 2.93 | 166 |
3.20 | 4 | 19 | - 55 Cnc d | 12.3 | 5.96 | 0.7 | 5235 | 1.0 | G8V | 3.88 | 5.50 | 445 | 0.92 | 11 | - | - HD 154345 b | 18.6 | 6.76 | 0.7 | 5468 | 0.9 | G8V | 0.82 | 4.21 | 226 | 1.80 | 21 | - | - HD 217107 c | 19.9 | 6.16 | 1.2 | 5704 | 1.1 | G8IV/V | 2.60 | 5.32 | 267 | 1.02 | 20 | - | - HD 142 c | 25.7 | 5.70 | 3.0 | 6249 | 1.2 | F7V | 5.30 | 6.80 | 264 | 0.58 | 16 | - | - eps Ind A b$^{c}$ | 3.6 | 4.69 | 0.2 | 4683 | 0.7 | K4V | 3.25 | 11.55 | 3188 | 0.21 | 14 | - | - GJ 229 b$^{c}$ | 5.8 | 8.15 | 0.1 | 3709 | 0.5 | M1.5V | 0.03 | 0.90 | 156 | 1.66 | 12 | - | - GJ 832 b | 4.9 | 8.67 | 0.0 | 3601 | 0.4 | M1.5V | 0.68 | 3.56 | 719 | 2.56 | 24 | - | - $^{a}$ Planet-star flux ratio for a half-illuminated planet ($\beta$=90∘). $^{b}$ Integration times to reach SNR=15 for an R=50 spectrum, over a range of illumination phase angles $\beta$. $^{c}$ Planet not listed in the SRP study report [1]. Note. — Dashes indicate integration times in excess of 25 days. Figure 10: Our ability to fit the mass-metallicity relationship among known gas-giant planets depends on the accuracy of the individual metallicity measurements. For several sets of 10 randomly selected targets, the statistical significance of the Pearson correlation coefficient is estimated over a range of metallicity fractional uncertainties (purple lines). The median result is shown as the thick black line, and the gray band shows the uncertainty on the Pearson correlation significance corresponding to that single instance. We find that a 30% metallicity uncertainty is sufficient to assess the mass-metallicity relationship. The mission requirements based on the driving investigation (detecting Earth-like planets) enable measurement of known giant-planet metallicities.
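A simplified version of the Monte Carlo behind Figure 10 can be sketched as follows, under stated assumptions: a placeholder power law $Z\propto M^{-0.5}$ stands in for the Solar System trend, masses are drawn log-uniformly, and significance is approximated with the usual t-statistic for a Pearson coefficient:

```python
import math, random

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

def mock_survey_r(n_planets=10, frac_err=0.30, seed=0):
    """One mock survey: metallicities follow an assumed Z ~ M^-0.5 power
    law, scattered by the fractional measurement uncertainty."""
    rng = random.Random(seed)
    log_m = [rng.uniform(-1.0, 1.0) for _ in range(n_planets)]  # log10 M/M_Jup
    sigma_dex = frac_err / math.log(10.0)                       # 30% -> ~0.13 dex
    log_z = [-0.5 * lm + rng.gauss(0.0, sigma_dex) for lm in log_m]
    return pearson_r(log_m, log_z)

rs = sorted(mock_survey_r(seed=s) for s in range(200))
median_r = rs[len(rs) // 2]
# crude significance (in sigma) of the median correlation for n = 10 points
t_sigma = abs(median_r) * math.sqrt(8.0) / math.sqrt(1.0 - median_r ** 2)
```

With 10 planets and 30% metallicity errors, the recovered correlation is strong and its significance comfortably exceeds $3\sigma$, consistent with the conclusion drawn from Figure 10.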
Integration times depend on the illumination phase angle during observation; while the orbital phase for radial-velocity-detected planets is known, the orbital inclination is uncertain. Table 7 shows the integration times required to achieve spectral SNR$>$15 for a range of illumination phase angles. A geometric albedo of 0.3 is assumed, with the radius constrained by the mass-size relation of [31] but conservatively capped at 1 Jupiter radius. This albedo is a conservative choice, well below Jupiter’s actual value of 0.5 [43, 12], but consistent with models of Jupiter-mass planets located closer to the Sun [12, 41]. Note that the expected integration times have changed somewhat from what was reported in the Probe study report [1] due to changing the geometric albedo from 0.5 to 0.3 and the shift from the integral field spectrometer to the slit prism spectrometer, which changed the current best estimate of the end-to-end efficiency from 2.5% to 3.4%. Integration times are on the order of several days for the majority of targets at favorable illumination angles. Ten spectra can be obtained with a total of 50 days of integration time allocated among the most favorable targets. Note that by the time the starshade begins operations this target list is expected to have grown, providing even more flexibility in scheduling. The observing windows for the current targets are plotted in Figure 6, showing whether a target is available for observation at any given time during operations. Roman-CGI observations, continued Doppler monitoring, and Gaia astrometry will constrain the brightness and orbital parameters prior to starshade operations, so these observations can be precisely planned for maximum planet visibility with no need for revisits. ### 5.4 Sensitivity to Exozodiacal Dust The dust surrounding Earth-like planets is small in mass but large in area, making it generally much easier to observe than the planet itself.
The integrated flux from the Solar System’s zodiacal dust, for example, is orders of magnitude brighter than the Earth. Imaging the dust disk distribution requires an integration time of 1 day (on average) or up to 4 days (maximum) to obtain a flux sensitivity of 0.1 zodi, enabling 5-$\sigma$ detection of disks as faint as 0.5 zodi. While disks this faint have never been observed (other than the Solar System, with 1 zodi), the median level inferred from a sample of nearby stars is 4.5 zodi [10], suggesting that most, if not all, of the systems with habitable zone dust will be detected. With a telescope imaging resolution of 0.065′′, the disks will be mapped at spatial resolutions of $\sim$0.2–0.5 AU. With this sensitivity it may be possible to detect the influence of planets on the zodiacal dust disk structure. [33] have shown that planets with $\gtrsim$4 Earth masses can introduce significant features on the dust disk brightness distribution of moderately bright dust disks ($\sim$6–10 zodi). The induced structure would be located in close proximity to the observed planet – with both following the same orbital trajectory – removing any ambiguity in whether the disk structure is planet related. ## 6 Observing Strategy The science objectives are met with an observing program that balances detection and characterization of new exoplanets with the characterization of known giant planets. The observing strategy is guided by the sensitivity toward individual targets (§5) and fundamental limits on the targets’ visibility. ### 6.1 Earth-Like Planets Having already identified the best targets for detection of Earth-like planets for SRP given its constraints (§5.1), we now describe our strategy for scheduling observations to optimize the number of characterized planets. The 8 most promising stars will be given priority for at least one visit. The revisit strategy for these targets depends on the information gathered during each visit.
The decision tree used for observations is illustrated in Figure 11 and described in detail here. The decision tree assumes that the imaging observation is complete within $\sim$1 day of integration and that the data will be available for analysis within a couple of days. In the time between the observation and data retrieval, the Roman telescope will be available for other observations while the starshade remains in position. The starshade science team will have fast analysis tools in hand to estimate the brightness of the exozodiacal dust disk and detect Earth-like exoplanet candidates. Based on the findings, the starshade team can either decide to initiate the cruise into position for observing the next target in the sequence or to initiate a long integration time observation for spectral characterization. The first visit is a reconnaissance observation. The first check is whether the system has an exozodiacal dust disk above or below 10 zodi. While some of our target stars have existing upper limits on their exozodiacal dust disk brightness, based on precision nulling measurements of their dust’s infrared emission, they are not sufficient to rule out deleterious levels of dust. Note that while we have used a fiducial value of 4.5 zodi in §5 and §7 for the purposes of estimating background, in reality the exozodiacal dust brightness will vary from target to target, and we assume it will be unknown prior to the first observation. Here we describe the tentative design reference mission for the Starshade Rendezvous Probe mission. If the disk is brighter than 10 zodi, the target is removed from the habitability and biosignatures target list, since it is not expected that an Earth-like exoplanet could be spectrally characterized against such a bright background, and because of the increased risk of false-positive planet detections from planet-induced dust structures [34].
If a target is removed, the observation plan will be updated with the next best target, which may or may not be visited at a later period depending on what is discovered for the target ensemble. If the analysis finds that the exozodiacal dust disk is $\leq$10 zodi but no planet consistent with an HZ orbit is found, the target is kept on the list for a revisit, as a planet could still appear in subsequent observations. If a planet consistent with an HZ orbit is found, then the imaging data will provide the brightness of the planet, which determines whether a spectral measurement with SNR $>$ 20 is achievable in the remainder of the observing time window, typically 25 days. If that is the case, a spectral observation will be initiated. On the second visit to a target, if no planet consistent with an HZ orbit has been detected, then the target is removed and the observation plan is updated with the next best target in the list. If the planet is detected either for the first or second time, a decision is made based on the data to take a spectroscopic measurement as described above. Spectroscopic measurements are only required to be performed once, so if such a measurement was made in the first visit, it will not be repeated. On the third visit, if a planet consistent with an HZ orbit has only been detected once, no further visits will be planned, since at least 3 detections in 4 planned visits are required. However, there is some flexibility in this decision, since the third visit will be made in the second year of observations and the science team will have additional information on the exozodiacal dust disk brightness and the population of planet candidates in the ensemble of targets already observed. In the event that there are a small number of target systems left, this system could have more visits planned.
If there are a large number of relevant target systems still available, this revisit priority may fall lower than the priority of these other systems, recognizing that revisit priority may evolve as knowledge is gained about each system. Spectroscopic measurements can be triggered based on the criteria discussed above. In the event that the planet has only been observed twice before and/or no spectroscopic measurement has been made yet, a fourth visit will be needed to determine whether the planet’s orbit is in the habitable zone. This will occur during the last $\sim$6 months of the mission, so the prioritization of targets could be significantly affected by how many Earth-like exoplanet candidates have been found and their potential for a spectral measurement with SNR$\geq$20. Figure 11: The decision tree for discovery of Earth-like planets adapts its observations as new information is gathered. Depending on 1) the amount of exozodiacal dust, 2) the number of observations of a planet candidate, and 3) whether long spectroscopic integration times are executed, a target may either be revisited or removed from the target list. The 2-year timeline on the bottom of the figure shows the windows available for observations (shown here with tau Ceti’s observing windows). ### 6.2 Known Gas-Giant Planets Unlike the habitability and biosignatures targets, it is expected that significantly more information about the known gas giants will be available. Although we have not assumed prior information on the targets obtained with the CGI in this study, it is possible that Roman-CGI will have already observed these systems and directly imaged the planets before starshade operations begin. This will determine how bright they are and whether or not spectral measurements with SNR$\geq$15 are viable. The top-ranked targets with their estimated integration times will be integrated into the observation plan.
### 6.3 Exozodiacal Dust No additional observations are required to observe the dusty debris that is prevalent in planetary systems; it will be detected alongside any observed planets. For the aim of characterizing the influence of planets on the dust disk distribution, target stars with bright exozodi (6–10 zodi) that show clumps with $\geq$10% excess brightness may be revisited up to three more times. Identifying Earth-like planets takes priority, but in the event that exozodiacal dust is too bright in most stars and systems with the characteristics described above exist, these observations will be executed. These observations will track the motion of dust disk clumps to test whether their orbits are Keplerian, indicative of an associated planet. Provided a system within this exozodi range with a $\geq 4M_{\oplus}$ planet is found, this measurement will provide a means to probe planetary systems around stars with high levels of exozodiacal dust in their habitable zones [33]. ### 6.4 Scheduling The visit strategy needs to be dynamic since it will be modified with each observation of habitability targets, while at the same time ensuring that a subset of $10$ known gas giants can be spectrally characterized with retargets that optimize the fuel usage. Since there is a significant amount of uncertainty associated with the distribution of exozodiacal dust disks and the frequency of occurrence of Earth-like exoplanets, a fairly sophisticated Monte Carlo simulation is required to demonstrate that the decision tree is adaptable to the full range of possibilities. This will be the subject of a future study. In the Starshade Probe Study report, the delta-v allocated to retargeting was 1100 m/s [1]. An additional 300 m/s of delta-v is allocated for stationkeeping. The allocation for large slews was estimated to be sufficient for 36 retargeting maneuvers using a limiting scenario where 9 targets were visited 4 times each (Figure 12). 
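As a rough sanity check on these allocations (our own back-of-envelope arithmetic, not a calculation from the report), spreading the retargeting budget evenly across the 36 maneuvers of the limiting scenario gives the average slew cost:

```python
# Back-of-envelope averages from the delta-v numbers quoted in the text.
retarget_dv = 1100.0       # m/s allocated to retargeting slews
stationkeeping_dv = 300.0  # m/s allocated to stationkeeping
n_maneuvers = 36           # retarget maneuvers in the 9-targets-x-4-visits case

total_dv = retarget_dv + stationkeeping_dv   # total propulsive allocation, m/s
avg_dv_per_slew = retarget_dv / n_maneuvers  # mean cost if spread evenly

print(f"total: {total_dv:.0f} m/s, average per slew: {avg_dv_per_slew:.1f} m/s")
```

In practice, individual burns vary strongly with slew angle and timing, which is why the budget is bounded with a stressing scenario rather than an average.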
For retargeting within other starshade mission concepts, see Refs. 5 and 7. While our objective is to visit 10 habitability and biosignatures targets, some with multiple revisits, and 10 known gas giants only once, the extreme scenario bounds the fuel usage because it assumes the need to retarget in situations that are not necessarily the most fuel efficient. The gas giants, for example, can be visited when they are close to the path needed for other targets, resulting in significantly smaller fuel burns. The main driver for delta-v is the need to visit targets 4 times in a period of 2 years. If the mission duration could be made longer, the retargeting strategy could follow the natural right ascension progression of observing windows as the Earth revolves around the Sun, which can result in highly efficient fuel burns. With a two-year window, there are cases where fairly aggressive burns are required to catch targets of interest when their observing windows are available. Figure 12: An example of a retargeting strategy that bounds the fuel usage for the Starshade Rendezvous Probe mission. We have allocated 36 retarget maneuvers to accomplish the objectives. In this case, we have chosen a stressing case where the 36 retarget maneuvers are applied to 9 targets, each visited four times. While this is not representative of the mission, which would visit 10 habitable exoplanet targets with a subset of them requiring revisits in addition to the known gas giants, this is a stressing case because visiting a target twice a year in different availability windows requires more significant burns than visiting a target only once a year, or just once during the mission, as will be the case for the known gas giants and for habitability targets with large exozodiacal dust disk backgrounds. 
The top panel shows the main spacecraft events and maneuvers along with the cumulative $\Delta$v estimates for each target as a function of time, labeled in year and month at the top of the figure. The vertical magenta tick marks show the time at which each target is visited. The bottom panel shows the targets chosen, arranged vertically in order of right ascension, with their observing availability windows shown as horizontal colored bars. Each bar has numerical labels representing the order of the visits. The translational retargeting slews (red lines) may take from a couple of days up to two weeks depending on the angular separation between targets. The observation days are chosen to be at the beginning of the window, to allow time for follow-up spectroscopy within the same window. ## 7 Expected Performance The expected scientific yield depends not only on the assumed mission parameters, but also on exoplanet demographics. While the frequency of gas-giant planets is relatively well known, the probability that each of our target stars will have a habitable-zone Earth-size planet ($\eta_{\oplus}$) is not well constrained. NASA’s ExoPAG Study Analysis Group (SAG-13) performed a meta-analysis of several published fits to Kepler survey results, producing a planet frequency formula as a function of planet size and location [32]. Combining this formula with our adopted habitable zone and Earth-like planet radius definitions (Table 4), we calculate an Earth-like planet frequency of $\eta_{\oplus}=0.24^{+0.3}_{-0.1}$. Note that this definition of Earth-like planets and corresponding frequency matches that of Ref. 29 for consistency; for possible alternative calculations of planet occurrence rates, see e.g. Refs. 44, 45. Figure 13: Cumulative completeness for habitability and biosignatures targets (based on Figure 8). 
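The yield estimates combine this occurrence rate with the cumulative completeness in Figure 13. A minimal sketch of that arithmetic follows; the completeness total below is an illustrative stand-in (chosen to reproduce a quoted yield), not a value read off the figure.

```python
# Expected yield = (cumulative single-visit completeness summed over targets)
#                  x (Earth-like planet occurrence rate eta_Earth).
# The completeness total is hypothetical; the real curve is in Figure 13.
eta_earth = 0.24                 # SAG-13-based Earth-like planet frequency
cumulative_completeness = 6.25   # illustrative sum over the target list

expected_detections = eta_earth * cumulative_completeness
print(expected_detections)
```

The same multiplication, with the appropriate completeness curve (orbit-constrained or spectral), produces each of the expected-number estimates quoted in the text.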
The cumulative completeness for habitability and biosignatures targets is shown in Figure 13. The expected number of Earth-like exoplanets is derived by multiplying the completeness by the occurrence rate $\eta_{\oplus}$. For a nominal observing program targeting at least 10 nearby stars, the expected number of detected Earth-like exoplanets is $1.5^{+1.9}_{-0.6}$. However, this is only for single-visit detection. The number with orbits constrained to the habitable zone is $1.2^{+1.5}_{-0.5}$. Note that the cumulative completeness for spectral measurements flattens after the 5th target. The expected number of Earth-like exoplanets with spectral characterization is $0.65^{+0.82}_{-0.27}$. The expected number with both orbits constrained and a successful spectral measurement is $0.45^{+0.56}_{-0.19}$. This number is lower than the expectation of $\sim$4 derived by Ref. 5 for the same telescope size and starshade launch mass, primarily because Roman’s actual end-to-end efficiency is lower than previously assumed, resulting in significantly longer integration times for high-quality spectra. It is important to note that Roman observations with a starshade will be sensitive to a wide variety of planets. Figure 14 shows the expected number of planets discovered by imaging (SNR$>$7), some of which are bright enough to obtain follow-up spectra. This threshold SNR value is relaxed compared to Earth-like planets because giant exoplanets tend to have more pronounced spectral features. The planet properties and frequency of occurrence are the same as in the Exo-S report [46], with the modification that the occurrence rate for warm Earths and super-Earths was raised to 0.24. These results indicate that $\sim$12 new planets will be discovered around the nearest sunlike stars, providing additional information on their planetary system architectures. Figure 14: Planet yield as a function of planet type and approximate temperature. 
The yield is obtained based on the single-visit completeness assuming detection with SNR $\geq$ 7. All observations assume a zodiacal dust disk brightness of 4.5 zodi. The bar chart assumes that the 16 targets for the habitability and biosignature gases investigation are visited at least once. The number of planets scales with the number of targets visited according to the single-visit completeness curve in Figure 13. ## 8 Conclusions We have presented the modeling, observing approach, and expected performance to meet the objectives of the Starshade Rendezvous Probe study [1]. The Starshade Rendezvous Probe concept has the capability to deliver first-of-a-kind exoplanet direct imaging and spectroscopy results in the next decade. A deep-dive investigation will provide the first examination of planetary systems around our nearest sunlike stars, including their habitable zones, giant exoplanets, and warm dust disks, opening a new frontier. The Starshade Rendezvous Probe concept is capable of discovering Earth-size planets in the habitable zones of nearby stars using the relatively moderate-aperture Roman space telescope. By initially characterizing the sensitivity to each individual target, we have found that while the SRP has the sensitivity to detect Earth-like exoplanets and constrain their orbits to the habitable zone, its primary limitation is the sensitivity of spectral measurements. The main means for improving this is to increase the aperture of the telescope, as would be done with HabEx, since this has the dual benefit of increasing the photon rate and better resolving the exozodiacal background. It is worth noting that there are large uncertainties in the occurrence rates of Earth-like exoplanets and the distribution of zodiacal dust disk brightness, which could result in an increased discovery potential for the SRP if nature behaves favorably. The SRP is the only observatory that would have the capability to detect Earth-like exoplanets within the next decade. 
Observations of known planets with the SRP could determine whether the atmospheric metallicity and mass of known giant exoplanets follows the correlation observed in our own Solar System, testing whether there is a trend in planetary formation. Meeting these objectives will begin to answer the driving questions of whether Earth is unique and how the Solar System compares to the planetary systems orbiting our nearest sunlike stars. The SRP is well equipped to meet this objective with an expected increase in the number of known gas giants with radial velocity measurements as well as observations with CGI prior to the SRP operations. The SRP will obtain measurements of the exozodiacal dust of nearby sunlike stars with unprecedented sensitivity. This information is key to the future of direct imaging observatories since the dust brightness distribution is not known well enough to pin down the level of background light expected for planet detection and characterization. The sensitivity of the SRP to dust disks provides additional scientific opportunities to investigate the influence of planets on the dust disk morphology provided such systems are found. The main challenge of observing with starshades – retargeting with constrained time windows in a relatively short mission duration – has been addressed with a decision tree that can accommodate the large degree of uncertainty associated with searching for Earth-like exoplanets. The observing plan adapts to new information as the targets are observed multiple times with predetermined criteria for deciding whether to revisit targets or take spectral measurements. We have estimated 36 retargeting maneuvers are necessary to meet the science objectives and we have bounded the amount of delta-v needed with a stress case scenario. The driving use of fuel is the occurrence of large angle retargeting maneuvers needed to observe some targets multiple times within the two-year duration of the mission. 
Depending on the realization of Earth-like exoplanet occurrence and exozodiacal dust, the SRP mission could have enough fuel for an extended mission to visit more targets. ###### Acknowledgements. Part of this work was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. ©2020. All rights reserved. This research has made use of 1) the NASA Exoplanet Archive, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program and 2) the SIMBAD database, operated at CDS, Strasbourg, France. ## References * [1] S. Seager, J. Kasdan, A. Romero-Wolf, et al., “Starshade Rendezvous Probe.” https://smd-prod.s3.amazonaws.com/science-red/s3fs-public/atoms/files/Starshade2.pdf (2019). * [2] M. C. Turnbull, T. Glassman, A. Roberge, et al., “The Search for Habitable Worlds. 1. The Viability of a Starshade Mission,” PASP 124, 418–447 (2012). * [3] D. Savransky and D. Garrett, “WFIRST-AFTA coronagraph science yield modeling with EXOSIMS,” Journal of Astronomical Telescopes, Instruments, and Systems 2(1), 1 – 13 (2015). * [4] C. C. Stark, A. Roberge, A. Mandell, et al., “Maximizing the ExoEarth Candidate Yield from a Future Direct Imaging Mission,” ApJ 795, 122 (2014). * [5] C. C. Stark, S. Shaklan, D. Lisman, et al., “Maximized exoEarth candidate yields for starshades,” Journal of Astronomical Telescopes, Instruments, and Systems 2, 041204 (2016). * [6] G. Soto, A. Sinha, D. Savransky, et al., “Starshade orbital maneuver study for WFIRST,” in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series 10400, 104001U (2017). * [7] G. J. Soto, D. Savransky, D. 
Garrett, et al., “Parameterizing the search space of starshade fuel costs for optimal observation schedules,” Journal of Guidance, Control, and Dynamics 42(12), 2671–2676 (2019). * [8] W. J. Borucki, D. G. Koch, G. Basri, et al., “Characteristics of Planetary Candidates Observed by Kepler. II. Analysis of the First Four Months of Data,” ApJ 736, 19 (2011). * [9] J. N. Winn and D. C. Fabrycky, “The Occurrence and Architecture of Exoplanetary Systems,” ARA&A 53, 409–447 (2015). * [10] S. Ertel, D. Defrère, P. Hinz, et al., “The HOSTS Survey for Exozodiacal Dust: Observational Results from the Complete Survey,” AJ 159, 177 (2020). * [11] H. R. Wakeford, D. K. Sing, T. Kataria, et al., “HAT-P-26b: A Neptune-mass exoplanet with a well-constrained heavy element abundance,” Science 356, 628–631 (2017). * [12] K. L. Cahoy, M. S. Marley, and J. J. Fortney, “Exoplanet Albedo Spectra and Colors as a Function of Planet Phase, Separation, and Metallicity,” ApJ 724, 189–214 (2010). * [13] L. C. Mayorga, J. Jackiewicz, K. Rages, et al., “Jupiter’s Phase Variations from Cassini: A Testbed for Future Direct-imaging Missions,” AJ 152, 209 (2016). * [14] T. D. Robinson, V. S. Meadows, and D. Crisp, “Detecting Oceans on Extrasolar Planets Using the Glint Effect,” ApJ 721, L67–L71 (2010). * [15] T. D. Robinson and C. T. Reinhard, “Earth as an Exoplanet,” arXiv e-prints 1804, arXiv:1804.04138 (2018). * [16] T. D. Robinson, V. S. Meadows, D. Crisp, et al., “Earth as an Extrasolar Planet: Earth Model Validation Using EPOXI Earth Observations,” Astrobiology 11, 393–408 (2011). * [17] C. Leinert, S. Bowyer, L. K. Haikala, et al., “The 1997 reference of diffuse night sky brightness,” A&AS 127, 1–99 (1998). * [18] C. A. Beichman, N. J. Woolf, and C. A. Lindensmith, The Terrestrial Planet Finder (TPF) : a NASA Origins Program to search for habitable planets, National Aeronautics and Space Administration ; Pasadena, Calif. : Jet Propulsion Laboratory, California Institute of Technology (1999). 
* [19] B. Nemati, “Detector selection for the WFIRST-AFTA coronagraph integral field spectrograph,” in Proc. SPIE, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series 9143, 91430Q (2014). * [20] B. Nemati, J. E. Krist, and B. Mennesson, “Sensitivity of the WFIRST coronagraph performance to key instrument parameters,” in Proc. SPIE, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series 10400, 1040007 (2017). * [21] S. B. Shaklan, L. Marchen, and E. Cady, “Shape accuracy requirements on starshades for large and small apertures,” in Proc. SPIE, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series 10400, 104001T (2017). * [22] E. J. Cady, N. J. Kasdin, R. Vanderbei, et al., “Optimal design of petal-shaped occulters for extra-solar planet detection,” in Techniques and Instrumentation for Detection of Exoplanets III, D. R. Coulter, Ed., Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series 6693, 669304 (2007). * [23] E. Cady, “Nondimensional representations for occulter design and performance evaluation,” in Techniques and Instrumentation for Detection of Exoplanets V, S. Shaklan, Ed., Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series 8151, 815112 (2011). * [24] J. W. Arenberg, T. Glassman, A. S. Lo, et al., “New Worlds Observer system architecture,” in Proc. SPIE, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series 7010, 70101S (2008). * [25] T. Glassman, A. S. Lo, J. Arenberg, et al., “Starshade scaling relations,” in Proc. SPIE, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series 7440, 744013 (2009). * [26] R. Hu, D. Lisman, S. Shaklan, et al., “Overview and Reassessment of Noise Budget of Starshade Exoplanet Imaging.” Submitted to SPIE (2020). * [27] M. C. Turnbull, “ExoCat-1: The Nearby Stellar Systems Catalog for Exoplanet Imaging Missions,” arXiv e-prints 1510, arXiv:1510.01731 (2015). * [28] L. A. 
Rogers, “Most 1.6 Earth-radius Planets are Not Rocky,” ApJ 801, 41 (2015). * [29] B. S. Gaudi, S. Seager, B. Mennesson, et al., “The Habitable Exoplanet Observatory (HabEx) Mission Concept Study Final Report,” arXiv e-prints , arXiv:2001.06683 (2020). * [30] J. F. Kasting, D. P. Whitmire, and R. T. Reynolds, “Habitable Zones around Main Sequence Stars,” Icarus 101, 108–128 (1993). * [31] J. Chen and D. Kipping, “Probabilistic Forecasting of the Masses and Radii of Other Worlds,” ApJ 834, 17 (2017). * [32] R. Belikov and et al. https://exoplanets.nasa.gov/exep/exopag/sag/#sag13 (2017). * [33] C. C. Stark and M. J. Kuchner, “The Detectability of Exo-Earths and Super-Earths Via Resonant Signatures in Exozodiacal Clouds,” ApJ 686, 637–648 (2008). * [34] D. Defrère, C. Stark, K. Cahoy, et al., “Direct imaging of exoEarths embedded in clumpy debris disks,” in Space Telescopes and Instrumentation 2012: Optical, Infrared, and Millimeter Wave, Proc. SPIE 8442, 84420M (2012). * [35] D. Sirbu, R. Belikov, E. Bendek, et al., “Prospects for exoplanet imaging in multi-star systems with starshades,” in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series 10400, 104001D (2017). * [36] A. Romero-Wolf, G. Bryden, S. Seager, et al., “Starshade Rendezvous: Exoplanet Orbit Constraints from Multi-Epoch Direct Imaging.” Submitted (2020). * [37] R. A. Brown, “Single-Visit Photometric and Obscurational Completeness,” ApJ 624, 1010–1024 (2005). * [38] Y. K. Feng, T. D. Robinson, J. J. Fortney, et al., “Characterizing Earth Analogs in Reflected Light: Atmospheric Retrieval Studies for Future Space Telescopes,” AJ 155, 200 (2018). * [39] F. Feng, M. Tuomi, H. R. A. Jones, et al., “Color Difference Makes a Difference: Four Planet Candidates around $\tau$ Ceti,” AJ 154, 135 (2017). * [40] L. Kreidberg, J. L. Bean, J.-M. 
Désert, et al., “A Precise Water Abundance Measurement for the Hot Jupiter WASP-43b,” ApJ 793, L27 (2014). * [41] R. E. Lupu, M. S. Marley, N. Lewis, et al., “Developing Atmospheric Retrieval Methods for Direct Imaging Spectroscopy of Gas Giants in Reflected Light. I. Methane Abundances and Basic Cloud Properties,” AJ 152, 217 (2016). * [42] M. Damiano and R. Hu, “ExoReL: A bayesian inverse retrieval framework for exoplanetary reflected light spectra,” The Astronomical Journal 159, 175 (2020). * [43] E. Karkoschka, “Spectrophotometry of the jovian planets and Titan at 300- to 1000-nm wavelength: The methane spectrum,” Icarus 111, 174–192 (1994). * [44] D. C. Hsu, E. B. Ford, D. Ragozzine, et al., “Occurrence Rates of Planets Orbiting FGK Stars: Combining Kepler DR25, Gaia DR2, and Bayesian Inference,” AJ 158, 109 (2019). * [45] S. Bryson, J. Coughlin, N. M. Batalha, et al., “A Probabilistic Approach to Kepler Completeness and Reliability for Exoplanet Occurrence Rates,” AJ 159, 279 (2020). * [46] S. Seager and et al., “Exo-S Report.” https://exoplanets.nasa.gov/exep/studies/probe-scale-stdt/ (2015).
We discuss connections between sequential system identification and control for linear time-invariant systems, often termed indirect data-driven control, as well as a contemporary direct data-driven control approach seeking an optimal decision compatible with recorded data assembled in a Hankel matrix and robustified through suitable regularizations. We formulate these two problems in the language of behavioral systems theory and parametric mathematical programs, and we bridge them through a multi-criteria formulation trading off system identification and control objectives. We illustrate our results with two methods from subspace identification and control: namely, subspace predictive control and low-rank approximation, which constrain trajectories to be consistent with a non-parametric predictor derived from (respectively, the column span of) a data Hankel matrix. In both cases we conclude that direct and regularized data-driven control can be derived as a convex relaxation of the indirect approach, and the regularizations account for an implicit identification step. Our analysis further reveals a novel regularizer and a plausible hypothesis explaining the remarkable empirical performance of direct methods on nonlinear systems. § INTRODUCTION The vast realm of data-driven control methods can be classified into indirect data-driven control approaches, consisting of sequential system identification and model-based control, as well as direct data-driven control approaches seeking an optimal decision compatible with recorded data. Both approaches have a rich history, and they have received renewed interest cross-fertilized by novel methods and widespread interest in machine learning. Representative recent surveys are [1, 2, 3, 4, 5, 6, 7, 8]. The pros and cons of both paradigms have often been elaborated on. 
The indirect approach is modular with well-understood subtasks, though modeling and identification are cumbersome, their results are often not useful for control (due to, e.g., incompatible uncertainty quantifications), and practitioners often prefer end-to-end methods. Direct approaches promise to resolve these problems by learning control policies directly from data. However, they are often analytically and computationally less tractable and rarely apply to real-time and safety-critical control systems. Selected direct methods that have proved themselves in theory and practice are iterative feedback tuning and virtual reference feedback tuning [9, 10, 11]. Quite a few approaches have bridged the direct and indirect data-driven control paradigms. Of relevance to this article, we note the literature on identification for control [7, 12, 13, 14] and control-oriented regularized identification [15], which propose that the control objective should bias the identification task. Likewise, dual control dating back to [16] addresses the exploration vs. exploitation trade-offs in simultaneous identification and optimal control; see [17, 18, 19] for recent contributions. Furthermore, [20] formulates data-driven model reference control as an identification problem, where various degrees of prior information can be incorporated so that the method can range between the direct and the indirect approach. We take a similar perspective here: the sequential identification and control tasks can be abstracted as a nested bi-level optimization problem: find the best control subject to a model, where the model is the best fit to a data set within some hypothesis class. This approach is modular, and both steps admit tractable formulations, but generally it is also suboptimal: there is no separation principle – aside from special cases, see <cit.> – for these two nested optimization problems. 
An end-to-end direct algorithmic approach may thus outperform indirect methods if a tractable formulation were available. For the latter we resort to a paradigm squarely in between behavioral system theory and subspace system identification methods. Behavioral system theory [21, 22, 23] takes an abstract view of dynamical systems as sets of trajectories, and it does not require parametric representations, which makes it appealing from a data-centric perspective. For example, linear time-invariant (LTI) systems are characterized as shift-invariant subspaces within an ambient space of time series. The role of identification is to find such a low-dimensional feature from data. Subspace methods take a similar (albeit more algorithmic) viewpoint [24, 25, 26] and extract parametric models from the range and null spaces of a low-rank data Hankel matrix. Both lines of work come together in a result known as the Fundamental Lemma [27]; see also [28, 29, 6] for recent extensions. It states that, under some assumptions, the set of all finite-length trajectories (the restricted behavior) of an LTI system equals the range space of a data Hankel matrix. This result serves as the theoretic underpinning for work in subspace identification [29, 30, 31] and data-driven control, in particular subspace predictive control based on non-parametric models [32, 33, 34], explicit feedback policies parametrized by data matrices [35, 36, 37], and data-enabled predictive control (DeePC) seeking compatibility of predicted trajectories with the range space of a data Hankel matrix. The latter methods were first established for deterministic LTI systems in [38, 39] and have recently been extended by suitably regularizing the optimal control problems. Closed-loop stability was certified in [40]. The regularizations were first mere heuristics [41] but have later been constructively derived by robust control and optimization [42, 43, 44, 45, 46]. 
These approaches, albeit recent, have proved themselves in practical nonlinear problems in multiple domains [46, 47, 48, 49, 45]. We also note the recent maximum-likelihood perspective [50]. We refer to [6] for a survey of results surrounding the fundamental lemma. In this paper, we explore the following questions: how does data-enabled predictive control relate to prior system identification? What are principled regularizations? And why does it work so well in the nonlinear case? We start our investigations from indirect data-driven control formulated as a bi-level optimization problem in the general output feedback setting. As a vehicle to transition between indirect and direct approaches, we consider a multi-criteria problem trading off identification and control objectives, reminiscent of similar approaches [7, 12, 13, 14, 15, 16, 17, 18, 19, 20] blending the two. We formally show that one tail of its Pareto front corresponds to the bi-level problem, and a convex relaxation results in the regularized data-enabled predictive control formulations used in [40, 41, 42, 43, 44, 45, 47, 48, 49, 46]. Most of our results are formulated in the abstract language of behavioral systems theory and parametric mathematical programs, but we also specialize our treatment to two concrete methods: subspace predictive control (SPC) [32, 33, 34] and low-rank approximation [39]. In both cases we conclude that direct regularized data-driven control can be derived as a convex relaxation of the indirect approach, where $(i)$ LTI complexity specifications (selecting the model class) are dropped, and $(ii)$ the projection of the data on the set of LTI systems is replaced by regularizations accounting for implicit identification. In particular, starting from indirect data-driven control based on low-rank approximation of a Hankel matrix, we arrive at a DeePC formulation with an $\ell_{1}$-regularizer (Theorem <ref>). 
When formulating indirect data-driven control via the SPC framework, our analysis reveals a novel regularizer for DeePC promoting a least-square data fit by projecting on the null space of the Hankel matrix (Theorem <ref>). We present numerical studies illustrating the role of regularization and the superiority of the new regularizer, together with comparisons. Informed by our analysis, we hypothesize and numerically confirm that the indirect approach is superior in case of “variance” error, e.g., for LTI stochastic systems, and the direct approach wins in terms of “bias” error, e.g., for nonlinear systems, supporting the empirical observations in [46, 47, 48, 49, 45]. Similar bias-variance trade-offs can also be found in the recent pre-print [51] discussing sub-optimality of direct and indirect methods as a function of the data size. These findings also resonate with those of data-driven model reference control [20], concluding that the direct approach is superior in reducing the bias whereas the indirect one gives better variance – especially if an erroneous model class is selected. The remainder of this paper is organized as follows: Section <ref> reviews representations of LTI systems. Section <ref> formulates the direct and indirect data-driven control problems, and Section <ref> bridges them. Section <ref> contains our numerical studies. Finally, Section <ref> concludes the paper. Readers familiar with the behavioral approach may skip Section <ref>. § LTI SYSTEMS AND THEIR REPRESENTATIONS We adopt a behavioral perspective, which allows for system theory independent of parametric representations. We aim at a concise exposition and refer to [21, 22, 23, 6] for details. §.§ Behavioral Perspective on Discrete-Time LTI systems Consider the discrete time axis $\mathbb Z$, the signal space $\real^{q}$, and the associated space of trajectories $\real^{q\mathbb Z}$ consisting of all $q$-variate sequences $(\dots,w(-1),w(0), w(1),\dots)$ with $w(i) \in \real^{q}$. 
Consider a permutation matrix $P$ partitioning each $w(i) = P \left[\begin{smallmatrix} u(i) \\ y(i) \end{smallmatrix}\right]$, where $u(i) \in \real^{m}$ and $y(i) \in \real^{q-m}$ are free and dependent variables that will later serve as inputs and outputs. The behavior $\bv$ is defined as a subset of the space of trajectories, $\bv \subset \real^{q\mathbb Z}$, and a system as the triple $(\mathbb Z,\real^{q},\bv)$. In what follows, we denote a system merely by its behavior $\bv$, keeping the signal space $\real^{q\mathbb Z}$ fixed throughout. A system is linear if $\bv$ is a subspace of $\real^{q\mathbb Z}$. Let $\sigma$ denote the shift operator with action $\sigma w({t}) = w({t+1})$. A system is time-invariant if $\bv$ is shift-invariant: $\sigma \bv = \bv$. Finally, $\bv_{L}$ is the restriction of $\bv$ to $\real^{qL}$, i.e., to trajectories of length $L\in \mathbb Z_{>0}$. §.§ Kernel Representations and Parametric Models Rather than mere set-theoretic descriptions, one typically works with explicit parametric representations (colloquially termed models) of LTI systems. For instance, a kernel representation with lag $\ell$ specifies an LTI behavior as \begin{equation*} \bv = \text{kernel}(R(\sigma)) = \bigl\{ w \in \real^{q\mathbb Z}\,:\; R(\sigma) w = 0 \bigr\}\,, \end{equation*} where $R(\sigma) = R_{0}+R_{1} \sigma + \dots + R_{\ell} \sigma^{\ell}$ is a polynomial matrix of degree $\ell$, and the matrices $R_{0}, R_{1}, \dots, R_{\ell}$ take values in $\real^{(q-m) \times q}$. Alternatively, one can unfold the kernel representation by revealing a latent variable: the state $x(t) \in \real^{n}$. 
The input/state/output (or state-space) representation is \begin{align*} \bv = & \bigl\{ w = P \left[\begin{smallmatrix} u \\ y\end{smallmatrix}\right] \in \real^{q\mathbb Z}\,:\; \exists x \in \real^{n\mathbb Z} \mbox{ such that } \\ & \quad \sigma x = Ax + Bu\,,\, y = Cx+Du \bigr\}\,, \end{align*} where $A \in \real^{n \times n}$, $B \in \real^{n \times m}$, $C \in \real^{(q-m) \times n}$, and $D \in \real^{(q-m) \times m}$. We assume that the lag $\ell$ (resp., the state dimension $n$) is minimal, i.e., there is no other kernel (resp., state-space) representation with smaller lag (resp., state dimension). The dimension $n$ of a minimal state-space representation manifests itself in a minimal kernel representation as $n=\sum_{i=1}^{q-m} \ell_{i}$, where $\ell_{i}$ is the degree of the $i$th row of $R(\sigma)$. §.§ Representation-Free Estimation and Behavior Dimension Given a state-space representation with $m$ inputs, order $n$, and lag $\ell$, the extended observability and convolution matrices \begin{equation*} \obs_{L} = \left[\begin{smallmatrix} C \\ CA \\ \vdots \\ C A^{L-1} \end{smallmatrix}\right] \quad\mbox{and}\quad \conv_{L} = \left[\begin{smallmatrix} D & 0 & \cdots & & 0 \\ CB & D & 0 & \cdots & 0 \\ CAB & CB & D & \ddots & \vdots \\ \vdots & \ddots & \ddots &\ddots & 0 \\ CA^{L-2}B & \cdots & CAB & CB & D \end{smallmatrix}\right] \end{equation*} parametrize all length-$L$ trajectories in $\bv_{L}$ as \begin{equation} \begin{bmatrix} u \\ y \end{bmatrix} = \begin{bmatrix} I & 0 \\ \conv_{L} & \obs_{L} \end{bmatrix} \begin{bmatrix} u \\ \alphani \end{bmatrix} \,, \label{eq: IOS representation} \end{equation} where $\alphani \in \real^{n}$ is the initial state. Recall the observability problem: given a length-$L$ time series of inputs and outputs, can $\alphani$ be reconstructed? 
Equation (<ref>) gives a succinct answer: namely, $\alphani$ can be reconstructed if and only if $\obs_{L}$ has full column rank. The minimum $L$ so that $\obs_{L}$ has full rank $n$ equals the lag $\ell$ of a minimal kernel representation. As readily deducible from (<ref>) and formalized in <cit.>, in a representation-free setting, the initial condition $\alphani$ for a trajectory $w \in \bv_{L}$ can be estimated via a prefix trajectory $\wini = \bigl(w(-\Tini+1), \dots, w(-1), w(0) \bigr)$ of length $\Tini \geq \ell$ so that the concatenation $\wini \wedge w \in \bv_{\Tini+L}$ is a valid trajectory. Hence, an LTI system is characterized by the complexity parameters $( q,m,n,\ell)$, and we denote the corresponding class of LTI systems by $\lti_{m,\ell}^{q,n}$: namely, LTI systems with $m$ inputs, $q-m$ outputs, minimal state dimension $n$, and minimal lag $\ell$. The following lemma characterizes the dimension of $\bv_{L}$ for $\bv \in \lti_{m,\ell}^{q,n}$ in terms of the complexity parameters $( q,m,n,\ell)$. Let $\bv \in \lti_{m,\ell}^{q,n}$. Then $\bv_{L}$ is a subspace of $ \real^{qL}$, and for $L \geq \ell$ its dimension is $mL + n$. Due to the linearity of $\bv$, $\bv_{L} \subset \real^{qL}$ is a subspace. To show that the dimension of $\bv_{L}$ equals $mL + n$ for $L \geq \ell$, we appeal to a minimal state-space representation of $\bv$ (a state-space-independent proof is in <cit.>). We have $w = P \left[\begin{smallmatrix} u \\ y\end{smallmatrix}\right] \in \bv_{L}$ if and only if (<ref>) holds for some $\alphani \in \real^{n}$. Since the representation is minimal, $\obs_{L} \in \real^{(q-m)L \times n}$ has full column rank for $L \geq \ell$. Therefore, the matrix $ \left[\begin{smallmatrix} I & 0 \\ \conv_{L} & \obs_{L} \end{smallmatrix}\right] \in \real^{qL \times (mL + n)}$ has full rank $mL+n$ for $L \geq \ell$, and its columns form a basis for $\bv_{L}$. Thus, $\bv_{L}$ has dimension $mL+n$. All forthcoming results assume known complexity $(q,m,n,\ell)$.
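The dimension formula $\dim \bv_{L} = mL + n$ can be checked numerically. The sketch below uses a hypothetical minimal system (all matrices are illustrative choices, not from the text) and verifies that the basis matrix from the proof has rank $mL+n$:

```python
import numpy as np

# Numerical check of dim(B_L) = m*L + n for an illustrative minimal system
# with n = 2 states, m = 1 input, one output (q = 2), and lag ell = 2.
A = np.array([[0.5, 1.0],
              [0.0, 0.3]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
n, m, p = 2, 1, 1
L = 5                      # horizon, L >= lag

# Extended observability matrix O_L = col(C, CA, ..., CA^{L-1}).
Obs = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(L)])

# Block-Toeplitz convolution matrix T_L of Markov parameters.
markov = [D] + [C @ np.linalg.matrix_power(A, k) @ B for k in range(L - 1)]
Conv = np.zeros((p * L, m * L))
for i in range(L):
    for j in range(i + 1):
        Conv[i * p:(i + 1) * p, j * m:(j + 1) * m] = markov[i - j]

# The columns of [[I, 0], [T_L, O_L]] form a basis of B_L.
basis = np.block([[np.eye(m * L), np.zeros((m * L, n))],
                  [Conv, Obs]])
print(np.linalg.matrix_rank(basis))   # m*L + n = 7
```

Since $\obs_{L}$ has full column rank for $L \geq \ell$, the stacked matrix has full rank $mL+n$, matching the lemma.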
When only data and no prior information is available, it is reasonable to assume upper bounds on $(q,m,n,\ell)$. In this case, the anticipated dimension of $\bv_{L}$ is at most $mL + n$, and the forthcoming rank equalities for the behavior dimension should be replaced by inequalities. §.§ Image Representation of Restricted Behavior The restricted behavior $\bv_{L}$, the set of all trajectories of length $L$, can be described by a kernel or state-space representation. As an interesting alternative, we recall the image representation of $\bv_{L}$ by a data matrix of a time series. Consider the sequence $w = \bigl(w(1),w(2),\dots,w(T)\bigr)$ with elements $w(i) \in \real^{q}$, and define the (block) Hankel matrix $\Han_{L}(w) \in \real^{qL \times (T-L+1)}$ of depth $L$, for some $L \leq T$, as \begin{equation*} \Han_{L}(w) = \begin{bmatrix} w(1) & w(2) & \dots & w(T-L+1) \\ w(2) & w(3) & \dots & w(T-L+2) \\ \vdots & \vdots & \ddots & \vdots \\ w(L) & w(L+1) & \dots & w(T) \end{bmatrix} \,. \end{equation*} A result due to [27] that became known as the Fundamental Lemma offers an image representation of the restricted behavior in terms of the column span of a data Hankel matrix. We present a necessary and sufficient version here under the following assumption: * rank$\left(\Han_L(w)\right)=mL + n$. Consider an LTI system $\bv \in \lti_{m,\ell}^{q,n}$ and an associated trajectory $w = \bigl(w(1), w(2),$ $ \dots, w(T)\bigr) \in \real^{qT}$. The following are equivalent for $L > \ell$: \begin{equation*} \text{colspan} \left(\Han_{L}(w) \right) = \bv_{L} \quad\Longleftrightarrow\quad \text{Assumption \ref{ass:L+n pe}} \end{equation*} In words, the Hankel matrix $\Han_L(w)$ composed of a single length-$T$ trajectory parametrizes all length-$L$ trajectories if and only if rank$\left(\Han_L(w)\right)=mL + n$.
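The rank condition of the Fundamental Lemma is easy to observe numerically. The following sketch (with the same kind of illustrative two-state system as above, an assumption on our part) builds a depth-$L$ block Hankel matrix from a single simulated trajectory and confirms that its rank equals $mL+n$:

```python
import numpy as np

# Fundamental Lemma rank check: a single trajectory of a hypothetical LTI
# system (m = 1, q = 2, n = 2), driven by random (hence generically
# persistently exciting) inputs, yields rank(H_L(w)) = m*L + n.
rng = np.random.default_rng(1)
A = np.array([[0.5, 1.0], [0.0, 0.3]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

T, L, m, n = 40, 5, 1, 2
x, w = np.zeros(2), []
for _ in range(T):
    u = rng.standard_normal(1)
    y = C @ x + D @ u
    w.append(np.concatenate([u, y]))     # stacked sample w(t) = (u(t), y(t))
    x = A @ x + B @ u

def block_hankel(w, L):
    """Depth-L block Hankel matrix; column j stacks w(j), ..., w(j+L-1)."""
    return np.column_stack([np.concatenate(w[j:j + L])
                            for j in range(len(w) - L + 1)])

H = block_hankel(w, L)                   # shape (q*L, T-L+1) = (10, 36)
print(np.linalg.matrix_rank(H))          # m*L + n = 7
```

Every column of $\Han_{L}(w)$ is a length-$L$ trajectory, so the rank can never exceed $mL+n$; the random input makes the bound tight.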
A plausible reasoning leading up to Lemma <ref> is that every column of $\Han_L(w)$ is a trajectory of length $L$, and the set of all such trajectories has dimension at most $mL+n$; see Lemma <ref>. Lemma <ref> extends the original Fundamental Lemma <cit.>, which requires input/output partitioning, controllability, and persistency of excitation of order $L+n$ (i.e., $\Han_{L+n}(u)$ must have full row rank) as sufficient conditions. Lemma <ref> also extends to mosaic-Hankel, Page, and trajectory matrices <cit.>. It is debatable whether the image representation via the Hankel matrix $\Han_{L}(w)$ should be called a “model”, as it is readily available from raw data. Hence, we call $\text{colspan} \left(\Han_{L}(w) \right)$ a data-driven representation of $\bv_{L}$ and reserve the term “model” for parametric (kernel or state-space) representations. Models are useful for many reasons: first and foremost, the availability of powerful analysis and design methods. Another readily discernible advantage is that models are vastly compressed compared to the image representation, and the latter holds only on finite horizons unless trajectories are woven together [31]; see also Remark <ref>. § DIRECT AND INDIRECT DATA-DRIVEN CONTROL We present different data-driven control formulations along with assumptions under which the formulations are consistent. These assumptions are used only for consistency statements and not for our main results, but they will prove insightful.
§.§ Optimal Control Problem Given a plant with plant behavior $\bv^P \in \lti_{m,\ell}^{q,n}$, a $\Tini$-length prefix trajectory $\wini = \bigl(w(-\Tini+1), \dots, w(0) \bigr) \in \bv_{\Tini}$, a $\Tr$-length reference trajectory $w_r\in\real^{q\Tr}$ in a reference behavior $\bv^R$, and a set of admissible trajectories $\mathcal W \subset \real^{q\Tr}$, consider the finite-time optimal control problem \begin{equation*} \boldsymbol{C}:\quad \underset{w \,\in\, \mathcal W}{\text{minimize}} \;\; \ctr(w-w_{r}) \quad \text{subject to} \;\; \wini \wedge w \in \bv^{P}_{\Tini+\Tr}\,. \end{equation*} For $\Tini \geq \ell$ the prefix trajectory $\wini$ implicitly sets the initial condition for the optimal control problem (<ref>); see Section <ref>. In case of an uncertain initial condition, the prefix $\wini$ can be made a decision variable and included via a penalty term in the cost; cf. [40, 39, 41, 43, 42]. We refrain from such extensions here. Typically, the cost $\ctr:\, \mathbb R^{q\Tr} \to \real_{\geq 0}$ includes a running and a terminal cost. The set $\mathcal W \subset \real^{q\Tr}$ captures constraints on admissible trajectories (e.g., input saturation). We denote a minimizer (if it exists) of the optimization problem $\boldsymbol{C}$ in (<ref>) by $w^{\star}_{C}$. We make the following regularity assumptions: * $\ctr:\, \mathbb R^{q\Tr} \to \real_{\geq 0}$ is a convex function that achieves its minimum when $w = w_{r}$; $\mathcal W \subset \real^{q\Tr}$ is closed, convex, and non-empty; and $(\real^{q\Tini} \oplus \mathcal W) \cap \bv^{P}_{\Tini+\Tr}$ is non-empty. The last assumption ensures that $\mathcal W$ is viable, i.e., a trajectory of $\bv^{P}$ originating anywhere can be contained within $\mathcal W$ for $\Tr$ steps. Problem (<ref>) is thus convex with a closed, convex, and non-empty feasible set due to Assumption <ref> and because ${\bv^{P}_{\Tini+\Tr}}$ is a subspace; see Lemma <ref>. Under further standard assumptions, existence and uniqueness of a (global) minimum can be assured, but we do not impose further structure.
For problem (<ref>), we do not necessarily assume $\bv^P = \bv^R$, since we often ask systems to track non-plant behavior (e.g., steps). Likewise, we generally do not assume feasibility: $w_{r} \in \mathcal W$. However, such assumptions connect to model reference control and allow us to state consistency results, as presented next. * $\wini \wedge w_{r} \in {(\real^{q\Tini} \oplus \mathcal W)} \cap \bv^{P}_{\Tini+\Tr}$, i.e., the reference $w_{r} \in \bv^R_{\Tr}$ is compatible with the prefix trajectory $\wini$, the plant $\bv^{P}$, and the constraints $\mathcal W$. Under Assumptions <ref> and <ref>, the minimum of the control problem $\boldsymbol{C}$ in (<ref>) is achieved for $w^{\star}_{C}=w_{r}$. Fact <ref> (and similar consistency results later) follows since $w^{\star}_{C}=w_{r}$ is feasible and achieves the minimum of the cost. Fact <ref> (and consistency Assumption <ref>) serve to establish a ground truth for comparing different problem formulations. Problem (<ref>) becomes a “classical” control problem if a parametric model for the plant $\bv^P$ is available. The latter is usually obtained from data through system identification. §.§ Indirect Data-Driven Control via System Identification Given a $\Td$-length trajectory $w_{d} \in\real^{q\Td}$ as identification data, conventional system identification and control consists of three steps. The first step, model class selection, amounts to choosing the set of candidate models, e.g., $\lti_{m,\ell}^{q,n}$ specified by the complexity $(q,n,m,\ell)$. The second step, model fitting, chooses an element from the model class that fits the data best in some specified sense, e.g., in terms of a distance between the data $w_{d}$ and the model $\bv$. This step is often synonymous with learning a parametric model (e.g., PEM), though some classic (e.g., ETFE) and modern (e.g., kernel-based) methods are non-parametric and bypass the model order selection; see [2] for a review (and the acronyms).
However, for control design the non-parametric models again have to be projected onto a behavior in $\lti_{m,\ell}^{q,n}$. Both approaches can be abstracted as \begin{equation*} \boldsymbol{ID}:\quad \underset{\hat w_{d},\, \widehat\bv}{\text{minimize}} \;\; \cid(\hat w_{d}-w_{d}) \quad \text{subject to} \;\; \hat w_{d} \in \widehat\bv_{\Td} \,,\; \widehat\bv \in \lti_{m,\ell}^{q,n} \,. \end{equation*} It is useful to think of the identification loss $\cid: \mathbb R^{q\Td} \!\to\! \real_{\geq 0}$ as a distance. Given the data $w_{d}$, problem (<ref>) seeks the closest LTI behavior within the class $\lti_{m,\ell}^{q,n}$, i.e., the closest subspace with dimensions as in Lemma <ref>. We denote a minimizer of (<ref>) by $\bigl(\hat w^{\star}_{d,ID},\widehat\bv_{ID}^{\star}\bigr)$ and assume the following about the identification loss: * $\cid(\cdot)$ achieves its minimum when $\hat w_{d} \!=\! w_{d}$. Note that existence and uniqueness of minimizers of (<ref>) hinges not only upon the regularity of the cost and constraint functions, but also on the data. In general, identification problems are non-convex. For now we keep problem (<ref>) abstract and general and resort to more specific formulations in Section <ref>. Exact identification of the true system requires exact data $w_{d} \in \bv^P_{\Td}$ and an identifiability assumption <cit.> which assures that $\bv^P$ can be recovered from $w_{d}$: * $w_{d} \in \bv^P_{\Td}$, i.e., $w_{d}$ is a valid trajectory of $\bv^P_{\Td}$; and * rank$\left(\Han_{\ell+1}(w_{d})\right)=m(\ell+1) + n$. Under Assumptions <ref>–<ref>, the minimum value of the system identification problem $\boldsymbol{ID}$ in (<ref>) is achieved for $\hat w_{d,ID}^{\star}=w_{d}$ and $\widehat\bv^{\star}_{ID} = \bv^P$. We again note that the (arguably strong) Assumptions <ref>, <ref>, and <ref> are used only for consistency statements (such as Fact <ref>) and not for our later main results and simulations.
Finally, equipped with an identified behavior $\widehat\bv^{\star} \in \lti_{m,\ell}^{q,n}$, the third step is certainty-equivalence control: solve the optimal control problem (<ref>) subject to the identified model: \begin{equation*} \underset{w \,\in\, \mathcal W}{\text{minimize}} \;\; \ctr( w-w_r) \quad \text{subject to} \;\; \wini \wedge w \in \widehat\bv^{\star}_{\Tini+\Tr}\,. \end{equation*} In (<ref>), $\ctr( w-w_{r})$ is merely a surrogate (predicted) control error since $ w \in \widehat\bv^{\star}$, the identified model, rather than $ w \in \bv^{P}$. Putting the system identification (<ref>) and certainty-equivalence control (<ref>) together, we arrive at indirect data-driven control formulated as the bi-level problem \begin{align*} \boldsymbol{BL}:\quad &\underset{w \,\in\, \mathcal W}{\text{minimize}} \;\; \ctr(w-w_r) \quad \text{subject to} \;\; \wini \wedge w \in \widehat\bv^{\star}_{\Tini+\Tr} \,, \\ &\widehat\bv^{\star} \in \underset{\hat w_{d},\, \widehat\bv \,\in\, \lti_{m,\ell}^{q,n}}{\text{argmin}} \;\; \cid(\hat w_{d}-w_{d}) \quad \text{subject to} \;\; \hat w_{d} \in \widehat\bv_{\Td} \,. \end{align*} The bi-level problem structure in (<ref>) reflects the sequential system identification and control tasks, that is, first a model is fitted to the data in the inner identification problem before the model is used for control in the outer problem. We denote a minimizer for the inner problem of (<ref>) by $\bigl(\hat w^{\star}_{d,BL},\widehat \bv^{\star}_{BL}\bigr)$ and a minimizer for the outer problem of (<ref>) by $w^{\star}_{BL}$. The bi-level formulation (<ref>) is only the tip of the iceberg, and the overall design may feature further nested levels, e.g., optimization of the model selection hyper-parameters $(n,\ell)$, uncertainty quantification, etc. We deliberately neglect these levels here and focus on identification and control. Since our ultimate interest is control, we treat models as disposable, i.e., they serve merely an auxiliary purpose. Of course, models are desirable for other reasons: system design, analysis, the reasons in Remark <ref>, etc. Under suitable consistency assumptions, the sequential system identification and control approach in (<ref>) is optimal. Consider the optimal control problem $\boldsymbol{C}$ in (<ref>) and the bi-level problem $\boldsymbol{BL}$ in (<ref>).
Then * under Assumptions <ref>–<ref>, the bi-level problem $\boldsymbol{BL}$ reduces to the optimal control problem $\boldsymbol{C}$; and * under the additional Assumptions <ref> and <ref>, the minimum value of the bi-level problem $\boldsymbol{BL}$ is achieved for $\hat w_{d,BL}^{\star}=w_{d}$, $\widehat\bv^{\star}_{BL} = \bv^P$, and $w^{\star}_{BL}= w_{r}$. The first statement echoes the “model as well as possible” paradigm and a separation of control and identification, albeit in a simple setting; see <cit.> for further reading. §.§ Direct Data-Driven Control via the Image Representation The direct data-driven control approach pursued here hinges upon the Fundamental Lemma <ref>. A direct corollary of the latter is that the prediction and estimation trajectories have to be within the column span of the data Hankel matrix. Assume that Assumptions <ref> and <ref> hold with $L$ replaced by $\Tini+\Tr$; then the optimal control problem $\boldsymbol{C}$ in (<ref>) is equivalent to \begin{equation*} \boldsymbol{D}:\quad \underset{w \,\in\, \mathcal W}{\text{minimize}} \;\; \ctr(w-w_r) \quad \text{subject to} \;\; \text{col}(\wini, w) \in \text{colspan}\left(\Han_{\Tini+\Tr}(w_d) \right)\,, \end{equation*} i.e., the minimizers and minima of (<ref>) and (<ref>) coincide. Under Assumptions <ref> (with $L$ replaced by $\Tini+\Tr$), <ref>, <ref>, and <ref>, the minimum value of (<ref>) is achieved for $w^{\star}_{D} = w_{r}$. It is instructive to compare the sample complexity of the direct and indirect approaches (<ref>) and (<ref>). Due to Assumption <ref>, (<ref>) requires more data than the identification Assumption <ref>. This discrepancy is due to (<ref>) seeking a multi-step predictor, whereas identification (<ref>) seeks a single-step predictor to be applied recursively. By weaving together multiple trajectories of length $\ell+1$, Assumption <ref> can be eased so that the data lengths coincide; see <cit.>. In comparison with system identification, the model order selection is implicit in Assumption <ref> and encoded in the rank of the Hankel matrix $\Han_{\Tini+\Tr}(w_{d})$ – at least, for exact data $w_{d} \in \bv^{P}_{\Td}$.
If the data $w_{d}$ is noisy, then $\Han_{\Tini+\Tr}(w_{d})$ likely has full rank, and the constraint of (<ref>) is vacuous. Thus, $w=w_{r}$ uniquely minimizes the surrogate control error, but the realized control error may be arbitrarily different. In short, certainty equivalence can fail arbitrarily poorly in direct data-driven control, and the direct approach has to be robustified. This is a major difference from the indirect (first identify, then control) approach (<ref>): one purpose of identification is to filter noisy data by projecting on a deterministic behavior. To go beyond certainty equivalence, the DeePC approaches [40, 42, 43, 44, 41, 45, 46] reformulate the constraint in (<ref>) as $\text{col}(\wini,w) = \Han_{\Tini+\Tr}(w_{d})\, g$ for some $g$ and add a robustifying regularizer: \begin{equation*} \boldsymbol{D}_{\lambda}:\quad \underset{w \,\in\, \mathcal W,\, g}{\text{minimize}} \;\; \ctr(w-w_r) + \lambda \cdot h(g) \quad \text{subject to} \;\; \text{col}(\wini, w) = \Han_{\Tini+\Tr}(w_d)\, g \,. \end{equation*} To provide intuition: every column of $\Han_{\Tini+\Tr}(w_{d})$ is a trajectory of $\bv^{P}_{\Tini+\Tr}$, and the decision variable $g$ linearly combines these columns into the optimal trajectory $w$ – consistent with the prefix trajectory $\wini$ and regularized by $h(g)$. The regularization function $h(\cdot)$ and parameter $\lambda$ are nonnegative. Choices for $h(\cdot)$ are one-norms [41], two-norms [44], squared two-norms [40, 45], or arbitrary $p$-norms [42, 43, 46]. The regularizers can be related to robust optimization formulations in deterministic [44, 45, 46] or stochastic settings [42, 43], where $\lambda$ is a design parameter specifying the size of the assumed uncertainty set. The regularized formulation (<ref>) has proved itself in practical (nonlinear) control systems [46, 47, 48, 49, 45]. § BRIDGING DIRECT & INDIRECT APPROACHES §.§ Multi-Objective Data-Driven Control From an optimization perspective, it is natural to lift the bi-level problem (<ref>) to a multi-criteria problem simultaneously optimizing for identification and control objectives.
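A minimal numerical sketch of a DeePC-style solve with the squared two-norm regularizer $h(g)=\|g\|_{2}^{2}$ follows. All system matrices, horizons, and weights are illustrative assumptions, and the initial-condition matching is imposed as a stiff soft constraint (weight `rho`) rather than an exact one, so that the whole problem collapses to a single ridge-regression least-squares call:

```python
import numpy as np

# Illustrative DeePC-style solve: minimize ||w - w_r||^2 + lam*||g||^2
# subject (softly) to Hp g = w_ini, with w = Hf g the planned future window.
rng = np.random.default_rng(2)
A = np.array([[0.5, 1.0], [0.0, 0.3]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Collect one exact data trajectory of stacked samples (u(t), y(t)).
T, Tini, Tf, q = 60, 2, 8, 2
x, w = np.zeros(2), []
for _ in range(T):
    u = rng.standard_normal(1)
    w.append(np.concatenate([u, C @ x]))
    x = A @ x + B @ u

def block_hankel(w, L):
    return np.column_stack([np.concatenate(w[j:j + L])
                            for j in range(len(w) - L + 1)])

H = block_hankel(w, Tini + Tf)
Hp, Hf = H[:q * Tini], H[q * Tini:]        # past / future row blocks

b_ini = np.concatenate(w[30:30 + Tini])    # a true prefix taken from the data
w_ref = np.zeros(q * Tf)                   # regulate the future window to zero

rho, lam = 1e8, 1e-4                       # soft-constraint and ridge weights
A_ls = np.vstack([np.sqrt(rho) * Hp, Hf, np.sqrt(lam) * np.eye(H.shape[1])])
b_ls = np.concatenate([np.sqrt(rho) * b_ini, w_ref, np.zeros(H.shape[1])])
g = np.linalg.lstsq(A_ls, b_ls, rcond=None)[0]

w_opt = Hf @ g                             # planned future trajectory
print(np.linalg.norm(Hp @ g - b_ini))      # prefix residual is small (stiff weight)
```

In a full implementation the constraint $w \in \mathcal W$ and an exact prefix constraint would call for a proper QP solver; the sketch only illustrates the role of $g$ and of the regularizer.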
Using weighted-sum scalarization, the multi-criteria problem is \begin{equation*} \boldsymbol{MC}_{\gamma}:\quad \underset{w \,\in\, \mathcal W,\, \hat w_{d},\, \widehat\bv \,\in\, \lti_{m,\ell}^{q,n}}{\text{minimize}} \;\; \gamma \cdot \cid(\hat w_{d}-w_{d}) + \ctr(w-w_r) \quad \text{subject to} \;\; \wini \wedge w \in \widehat\bv_{\Tini+\Tr} \,,\; \hat w_{d} \in \widehat\bv_{\Td} \,, \end{equation*} where the trade-off parameter $\gamma \geq 0$ traces the Pareto front between the identification and optimal control objectives. The multi-criteria problem (<ref>) can be interpreted as fitting a model $\widehat\bv$ simultaneously to two data sets: the identification data $w_{d}$ and the reference $w_{r}$. From a control perspective, the identification criterion biases the solution $w \in \widehat\bv$ to adhere to the observed data $w_{d}$ rather than merely matching the to-be-tracked reference $w_{r}$. Likewise, from the other side, the identification criterion is biased by the control objective. In short, control and identification regularize each other, in the spirit of identification for control [7, 12, 13, 14]. A similar formulation has been proposed in [15], interpolating between PEM identification and a model-reference control objective. Likewise, the data-driven model reference control formulation in [20] interpolates between a direct and an indirect approach. Finally, dual control approaches consider similar multi-criteria formulations balancing exploration (for identification) and exploitation (i.e., optimal control) [16, 17, 18, 19]. We denote a minimizer of (<ref>) by $\bigl(w^{\star}_{MC},\hat w^{\star}_{d,MC},\widehat \bv^{\star}_{MC}\bigr)$. Under Assumptions <ref>–<ref>, for any $\gamma \geq 0$ the minimum of the parametric multi-criteria problem $\boldsymbol{MC}_{\gamma}$ is achieved for $\hat w_{d,MC}^{\star}=w_{d}$, $\widehat\bv^{\star}_{MC} = \bv^P$, and $w^{\star}_{MC} = w_{r}$. Different points on the Pareto front of (<ref>) place different emphasis on the control and identification objectives. Below we formalize that for $\gamma$ sufficiently large, the multi-criteria problem (<ref>) recovers the bi-level problem (<ref>) corresponding to sequential system identification and control.
We follow standard penalty arguments from bi-level optimization [52, 53], which are particularly tractable here since (<ref>) is only weakly coupled: the inner problem does not depend on the decision variable $w$ of the outer problem. Assume that the inner problem admits a minimum, termed the value function: \begin{equation*} \varphi = \underset{\hat w_{d},\, \widehat\bv}{\text{minimize}} \;\; \cid(\hat w_{d}-w_{d}) \quad \text{subject to} \;\; \hat w_{d} \in \widehat\bv_{\Td} \,,\; \widehat\bv \in \lti_{m,\ell}^{q,n} \,. \end{equation*} The bi-level problem (<ref>) then reads equivalently as \begin{equation*} \underset{w \,\in\, \mathcal W,\, \hat w_{d},\, \widehat\bv}{\text{minimize}} \;\; \ctr(w-w_r) \quad \text{subject to} \;\; \wini \wedge w \in \widehat\bv_{\Tini+\Tr} \,,\; \hat w_{d} \in \widehat\bv_{\Td} \,,\; \widehat\bv \in \lti_{m,\ell}^{q,n} \,,\; \cid(\hat w_{d}-w_{d}) - \varphi = 0 \,. \end{equation*} At this point the reader is encouraged to review the definition and salient properties of a constraint qualification termed partial calmness [52, 53]; see the appendix. If problem (<ref>) is partially calm at a local minimizer and $\ctr(\cdot)$ is continuous, then there is $\gamma^{\star}>0$ so that, for all $\gamma > \gamma^{\star}$, problem (<ref>) equals \begin{equation*} \underset{w \,\in\, \mathcal W,\, \hat w_{d},\, \widehat\bv}{\text{minimize}} \;\; \gamma \cdot \bigl| \cid(\hat w_{d}-w_{d}) - \varphi \bigr| + \ctr(w-w_r) \quad \text{subject to} \;\; \wini \wedge w \in \widehat\bv_{\Tini+\Tr} \,,\; \hat w_{d} \in \widehat\bv_{\Td} \,,\; \widehat\bv \in \lti_{m,\ell}^{q,n} \,, \end{equation*} that is, the local minimizers of (<ref>) and (<ref>) coincide; see Proposition <ref>. We now drop the absolute value (since $ \cid(\hat w_{d}-w_{d}) - \varphi \geq 0$) and the constant $\varphi$ (which in our case does not depend on the variable $w$ of the outer problem) from the objective of (<ref>) to recover problem (<ref>). We have thus established a chain of equivalences relating the bi-level and multi-criteria problems. We summarize our discussion below. Consider the parametric multi-criteria problem $\boldsymbol{MC}_{\gamma}$ in (<ref>) and the bi-level problem $\boldsymbol{BL}$ in (<ref>). Assume that the inner identification problem admits a minimum as in (<ref>), that (<ref>) is partially calm at any local minimizer, and that $\ctr(\cdot)$ is continuous. Then there is $\gamma^{\star}>0$ so that for $\gamma > \gamma^{\star}$ the problem $\boldsymbol{MC}_{\gamma}$ is equivalent to $\boldsymbol{BL}$, i.e., $ w^{\star}_{MC} = w^{\star}_{BL}$, $\hat w^{\star}_{d,MC} = \hat w^{\star}_{d,BL}$, and $\widehat \bv^{\star}_{MC} = \widehat \bv^{\star}_{BL}$.
Moreover, the optimal values of $\boldsymbol{MC}_{\gamma}$ and $\boldsymbol{BL}$ coincide up to the constant $\gamma \cdot \varphi$, with $\varphi$ defined in (<ref>). The following comments are in order regarding partial calmness. As discussed in Proposition <ref>, partial calmness is equivalent to the constraint $ \cid(\hat w_{d}-w_{d}) - \varphi \geq 0$ serving as an exact penalty. Partial calmness is satisfied, for instance, appealing to Proposition <ref>, if the identification cost $\cid(\cdot)$ can be phrased as a distance (see the discussion following the identification problem (<ref>)) and $\ctr(\cdot)$ is Lipschitz continuous over the feasible set, e.g., if the feasible set is compact (due to constraints) or the control performance is measured by a norm or Huber loss. The Lipschitz constant then serves as a lower estimate for $\gamma^{\star}$. A non-Lipschitz cost requires $\gamma \to \infty$ as a sufficient condition. Note that for $\gamma \to \infty$ Proposition <ref> holds without assumptions, since (<ref>) is merely an indicator-function reformulation of (<ref>). Our relaxations in the next sections will, among other things, drop the requirement that $\gamma$ be sufficiently large as well as the LTI complexity specification $\widehat\bv \in \lti_{m,\ell}^{q,n}$. Even if the identification (<ref>) is convex, the multi-criteria problem (<ref>) is not, since it simultaneously optimizes over the to-be-identified model $\widehat\bv$ and the to-be-designed trajectory $w$. This can be spotted in a kernel representation: the constraint $ w \in \widehat\bv_{\Tr}$ takes the form $\widehat R(\sigma) w = 0$, where both $\widehat R$ and $ w$ are variables. Other representations lead to the same conclusion. Consider the multi-criteria problem (<ref>) and a kernel representation of the to-be-identified behavior: $\widehat\bv = \text{kernel}(\widehat R(\sigma))$. Then the feasible set of (<ref>) is not convex.
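The exact-penalty threshold behind partial calmness can be illustrated on a scalar toy instance (all numbers below are illustrative, not from the text): the inner cost $|x-d|$ plays the role of $\cid$ with value function $\varphi = 0$, the outer cost $|x-r|$ plays the role of $\ctr$ and has Lipschitz constant $1$, and $\gamma$ above that constant recovers the bi-level solution:

```python
import numpy as np

# Scalar exact-penalty illustration: minimize gamma*|x - d| + |x - r|.
# The bi-level solution is x = d (first minimize the inner cost exactly);
# it is recovered whenever gamma exceeds the Lipschitz constant (= 1) of
# the outer cost, while smaller gamma lets the outer objective win.
d, r = 0.0, 1.0
xs = np.linspace(-2.0, 2.0, 4001)          # fine grid containing d and r

def penalized_minimizer(gamma):
    cost = gamma * np.abs(xs - d) + np.abs(xs - r)
    return xs[np.argmin(cost)]

print(penalized_minimizer(2.0))   # gamma > 1: approximately d = 0
print(penalized_minimizer(0.5))   # gamma < 1: approximately r = 1
```

The threshold here is exactly the Lipschitz constant of the outer cost, mirroring the lower estimate for $\gamma^{\star}$ discussed above.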
We believe that the multi-criteria problem is interesting in its own right: studying its Pareto front and choosing an optimal trade-off parameter may yield superior performance. Our problem setup thus far has been conceptual rather than practically useful. Below, we consider concrete problem formulations and turn our conceptual insights into concise results. §.§ Bridging Towards Subspace Predictive Control (SPC) We explain SPC from the perspective of the Fundamental Lemma <ref>, which states that any trajectory $\wini \wedge w \in \bv_{\Tini+\Tr}^{P}$ lies in $\text{colspan} \left(\Han_{\Tini+\Tr}(w_{d}) \right)$. Recall that $\wini$ is a prefix trajectory of length $\Tini \geq \ell$ setting the initial condition, and $w$ is a future trajectory of length $\Tr>1$ to be designed via optimal control. Accordingly, permute and partition $w$ and the Hankel matrix \begin{equation*} \begin{bmatrix} \wini \\ w \end{bmatrix} \sim \begin{bmatrix} \uini \\ u \\ \hline \yini \\ y \end{bmatrix} \quad\text{and}\quad \Han_{\Tini+\Tr}(w_{d}) \sim \begin{bmatrix} \Up \\ \Uf \\\hline \Yp \\ \Yf \end{bmatrix} = \begin{bmatrix} \Han_{\Tini + \Tr}(u_{d}) \\\hline \Han_{\Tini + \Tr}(y_{d}) \end{bmatrix} \,, \end{equation*} where $\uini \in \real^{m\Tini}$, $\yini \in \real^{(q-m)\Tini}$, and $\sim$ denotes similarity under a coordinate permutation. The subscripts “p” and “f” are synonymous with “past” and “future”. We seek a linear model, i.e., a matrix $K$, relating past and future as \begin{equation} y = \underbrace{ \begin{bmatrix} K_{p} & \vline& K_{f} \end{bmatrix}}_{=K} \cdot \begin{bmatrix} \uini \\ \yini \\\hline u \end{bmatrix} \label{eq: ARX transition model} \,. \end{equation} The multi-step predictor $K$ is found from the Hankel matrix data by means of the least-square criterion <cit.> \begin{equation*} \underset{K}{\text{minimize}} \;\; \Bigl\| \Yf - K \cdot \left[\begin{smallmatrix} \Up \\ \Yp \\ \Uf \end{smallmatrix}\right] \Bigr\|_{F} \,, \end{equation*} where $\|\cdot\|_{F}$ is the Frobenius norm.
Via the Moore–Penrose inverse, the solution of (<ref>) is the classic SPC predictor [32] \begin{equation} K = \Yf \begin{bmatrix} \Up \\ \Yp \\\Uf \end{bmatrix}^{\dagger} \,. \label{eq: SPC predictor} \end{equation} It is insightful to compare equation (<ref>) and the matrices $K_{p},K_{f}$ to equation (<ref>) and the extended observability and impulse-response matrices $\obs_{L}$ and $\conv_{L}$, respectively. One realizes that for exact data, (<ref>) is an ARX model, with rank$(K_{p})=n$ assuring LTI behavior of the desired complexity and a lower block-triangular zero pattern of $K_{f}$ assuring causality. For inexact data, LTI behavior of the desired complexity is promoted by low-rank approximation (typically via singular-value thresholding of $K_{p}$) [32], and one aims to regain causality by heuristically thresholding $K_{f}$ towards the desired zero pattern <cit.>, <cit.>. The causality requirement can also be omitted for offline or receding-horizon control, but it is useful to condition the data on the set of causal models. These steps bring the linear relation (<ref>) half-way towards an LTI model. An LTI model, however, has further structure, e.g., $K_{f}$ is Toeplitz, and the entries of $K_{p}$ and $K_{f}$ are coupled; see (<ref>). Hence, in this case the identification problem (<ref>) is relaxed to the single, monolithic, and non-convex program \begin{align*} \underset{K}{\text{minimize}} \;\; & \Bigl\| \Yf - K \cdot \left[\begin{smallmatrix} \Up \\ \Yp \\ \Uf \end{smallmatrix}\right] \Bigr\|_{F} \\ \text{subject to} \;\; & K = \begin{bmatrix} K_{p} & K_{f} \end{bmatrix} \,,\; K_{f} \text{ lower-block triangular} \,,\; \text{rank}( K_{p}) = n \,, \end{align*} where the lower-block triangular specification means that all entries above the diagonal $(q-m)\times m$ blocks equal zero.
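The SPC predictor is a one-line computation once the data Hankel matrix is partitioned. The following sketch (system matrices and horizons are illustrative choices) computes $K = \Yf \,\text{col}(\Up,\Yp,\Uf)^{\dagger}$ from one trajectory and validates it on a fresh one; for exact data and $\Tini$ at least the lag, the prediction is exact:

```python
import numpy as np

# SPC predictor K = Yf * pinv(col(Up, Yp, Uf)) on an illustrative system
# with n = 2, m = 1 input, one output, lag ell = 2; we pick Tini = ell.
A = np.array([[0.5, 1.0], [0.0, 0.3]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

def simulate(T, seed):
    """Simulate T steps from zero initial state with random input."""
    rng = np.random.default_rng(seed)
    x, us, ys = np.zeros(2), [], []
    for _ in range(T):
        u = rng.standard_normal(1)
        us.append(u)
        ys.append(C @ x + D @ u)
        x = A @ x + B @ u
    return np.array(us), np.array(ys)

def hankel(s, L):
    return np.column_stack([np.concatenate(s[j:j + L])
                            for j in range(len(s) - L + 1)])

Tini, Tf = 2, 4
ud, yd = simulate(60, seed=3)
Hu, Hy = hankel(ud, Tini + Tf), hankel(yd, Tini + Tf)
Up, Uf = Hu[:Tini], Hu[Tini:]
Yp, Yf = Hy[:Tini], Hy[Tini:]

K = Yf @ np.linalg.pinv(np.vstack([Up, Yp, Uf]))   # SPC predictor (4 x 8)

# Exact data: K reproduces the future outputs of an unseen trajectory.
uv, yv = simulate(Tini + Tf, seed=7)
y_hat = K @ np.concatenate([uv[:Tini, 0], yv[:Tini, 0], uv[Tini:, 0]])
print(np.linalg.norm(y_hat - yv[Tini:, 0]))        # ~0 for noise-free data
```

For noisy data, the same formula is used but $K_{p}$ and $K_{f}$ lose their rank and causality structure, as discussed above.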
We obtain a parametric version of the indirect data-driven approach (<ref>), where $\wini \wedge w \in \widehat\bv^{\star}_{\Tini+\Tr}$ and $w \in \mathcal W = \mathcal U \times \mathcal Y$ are replaced by (<ref>) and $(u,y) \in \mathcal U \times \mathcal Y$, respectively: \begin{align*} \underset{u \,\in\, \mathcal U,\, y \,\in\, \mathcal Y}{\text{minimize}} \;\; & \ctr\left( \left[\begin{smallmatrix} y - y_{r} \\ u - u_{r} \end{smallmatrix}\right] \right) \quad \text{subject to} \;\; y = K^{\star} \cdot \left[\begin{smallmatrix} \uini \\ \yini \\ u \end{smallmatrix}\right] \,, \\ & K^{\star} \in \underset{K}{\text{argmin}} \; \Bigl\| \Yf - K \cdot \left[\begin{smallmatrix} \Up \\ \Yp \\ \Uf \end{smallmatrix}\right] \Bigr\|_{F} \;\; \text{subject to} \;\; K = \begin{bmatrix} K_{p} & K_{f} \end{bmatrix} \,,\; K_{f} \text{ lower-block triangular} \,,\; \text{rank}( K_{p}) = n \,. \end{align*} We stress that (<ref>) is generally not an equivalent reformulation of (<ref>), since the inner identification does not necessarily lead to an LTI model; see the comments following equation (<ref>). For comparison, consider also an instance of the direct regularized problem (<ref>) with regularizer $h(g) = \|(I-\Pi)g\|_{p}$: \begin{equation*} \underset{u \,\in\, \mathcal U,\, y \,\in\, \mathcal Y,\, g}{\text{minimize}} \;\; \ctr\left( \left[\begin{smallmatrix} y - y_{r} \\ u - u_{r} \end{smallmatrix}\right] \right) + \lambda \cdot \|(I-\Pi)g\|_{p} \quad \text{subject to} \;\; \left[\begin{smallmatrix} \uini \\ \yini \\ u \\ y \end{smallmatrix}\right] = \left[\begin{smallmatrix} \Up \\ \Yp \\ \Uf \\ \Yf \end{smallmatrix}\right] g \,. \end{equation*} Here, $\|\cdot\|_{p}$ is any $p$-norm, $\Pi = \left[\begin{smallmatrix} \Up \\ \Yp \\\Uf \end{smallmatrix}\right]^{\dagger}\! \left[\begin{smallmatrix} \Up \\ \Yp \\\Uf \end{smallmatrix}\right]$, and $(I-\Pi)$ is an orthogonal projector onto the kernel of the first three block-constraint equations. The proof of Theorem <ref> will later show that this regularizer is in fact induced by the least-square identification (<ref>), i.e., $\|(I-\Pi)g\|_{p}=0$ if and only if the least-square criterion is minimized. Hence, it robustifies the problem akin to least squares. We state the following consistency result. Under Assumptions <ref> (with $L$ replaced by $\Tini+\Tr$), <ref>, <ref>, and <ref>, for any $\lambda \geq 0$ the minimum of the regularized problem (<ref>) is achieved for $y^{\star} = Y_{f}g^{\star} = y_{r}$ and $u^{\star} = U_{f}g^{\star} = u_{r}$, where $\|(I-\Pi)g^{\star}\|_{p}=0$. Fact <ref> may not appear insightful at first glance, but it highlights an important point. The projection-based regularizer $h(g)= \|(I-\Pi)g\|_{p}$ is consistent since it penalizes only the homogeneous part of the solution to the constraint equations (<ref>) and does not affect the variables $(u,y)$.
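The consistency of the projector-based regularizer can be verified directly: for the least-norm solution $g^{\star} = M^{\dagger}b$ of $Mg = b$, the penalty $\|(I-\Pi)g^{\star}\|$ vanishes identically, whereas $\|g^{\star}\|$ itself does not. The matrices below are arbitrary illustrative stand-ins for $\text{col}(\Up,\Yp,\Uf)$ and the right-hand side:

```python
import numpy as np

# The projector regularizer does not penalize the least-norm solution:
# with Pi = pinv(M) @ M, the minimum-norm g* = pinv(M) @ b satisfies
# (I - Pi) g* = 0, while ||g*|| > 0 (so a plain norm penalty biases it).
rng = np.random.default_rng(4)
M = rng.standard_normal((8, 20))       # stand-in for col(Up, Yp, Uf), wide
b = rng.standard_normal(8)             # stand-in for col(u_ini, y_ini, u)

Mp = np.linalg.pinv(M)
Pi = Mp @ M                            # orthogonal projector onto rowspan(M)
g_star = Mp @ b                        # least-norm solution of M g = b

print(np.linalg.norm((np.eye(20) - Pi) @ g_star))   # ~0: zero penalty
print(np.linalg.norm(g_star))                       # > 0: ||g|| would bias
```

This is exactly the mechanism behind Fact <ref>: the penalty acts only on the component of $g$ in the kernel of $M$, which never influences $(u,y)$.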
In comparison, the conventional norm-based regularizer $h(g) = \|g\|_{p}$ is not consistent: it also penalizes the particular solution of the constraint equations in (<ref>) and thus the variables $(u,y)$. Hence, even with the ideal consistency Assumptions <ref>, <ref>, <ref>, and <ref> in place, the norm-based regularizer $h(g) = \|g\|_{p}$ with $\lambda \neq 0$ does not lead to the ground-truth solution $y^{\star} = Y_{f}g^{\star} = y_{r}$, $u^{\star} = U_{f}g^{\star} = u_{r}$; see also Remark <ref>. The following is the main result of this subsection. Consider the indirect data-driven control problem (<ref>) and the direct data-driven control problem (<ref>) parameterized by $\lambda \geq 0$. Let Assumption <ref> hold and assume that $\ctr(\cdot)$ is Lipschitz continuous. For $\lambda$ sufficiently small, (<ref>) is a convex relaxation of (<ref>), that is, $(i)$ (<ref>) is convex, $(ii)$ any feasible $(u,y)$ in (<ref>) is feasible for (<ref>), and $(iii)$ the optimal value of (<ref>) lower-bounds that of (<ref>). First, we perform a convex relaxation by dropping the rank and block-triangularity constraints in (<ref>). Second, observe that the explicit solution of the inner problem, the predictor (<ref>), is equivalently derived as the least-norm solution \begin{align*} y = \Yf g^{\star} \;\text{ where}\; &g^{\star} = \argmin_{g} \| g\|_{2} \nonumber \\&\text{subject to} \begin{bmatrix} \Up \\ \Yp \\\Uf \end{bmatrix} g = \begin{bmatrix} \uini \\ \yini \\ u \end{bmatrix} \,. \end{align*} Hence, the relaxed bi-level problem reads as \begin{align*} \underset{u \,\in\, \mathcal U,\, y \,\in\, \mathcal Y}{\text{minimize}} \;\; & \ctr\left( \left[\begin{smallmatrix} y - y_{r} \\ u - u_{r} \end{smallmatrix}\right] \right) \quad \text{subject to} \;\; y = \Yf g^{\star} \,, \\ & g^{\star} \in \underset{g}{\text{argmin}} \; \|g\|_{2} \;\; \text{subject to} \;\; \left[\begin{smallmatrix} \Up \\ \Yp \\ \Uf \end{smallmatrix}\right] g = \left[\begin{smallmatrix} \uini \\ \yini \\ u \end{smallmatrix}\right] \,. \end{align*} We now follow the arguments from Section <ref> to reduce the bi-level problem (<ref>) to a single-level multi-criteria problem. As in (<ref>), the inner problem can be replaced by a constraint assuring that it achieves its minimum.
Here, we add an orthogonality constraint to the constraints of the inner problem: \begin{equation*} \begin{bmatrix} \Up \\ \Yp \\\Uf \end{bmatrix} g = \begin{bmatrix} \uini \\ \yini \\ u \end{bmatrix} \quad\text{and}\quad 0 = \| (I - \Pi) g \|_{p} \,. \end{equation*} The orthogonality constraint $0 = \| (I - \Pi) g \|_{p}$ poses the inner optimality condition as the distance to the subspace containing the minimizers of the inner problem. Retaining all constraints, (<ref>) can then be formulated as the single-level problem \begin{align*} \underset{u \,\in\, \mathcal U,\, y \,\in\, \mathcal Y,\, g}{\text{minimize}} \;\; & \ctr\left( \left[\begin{smallmatrix} y - y_{r} \\ u - u_{r} \end{smallmatrix}\right] \right) \\ \text{subject to} \;\; & \left[\begin{smallmatrix} \uini \\ \yini \\ u \\ y \end{smallmatrix}\right] = \left[\begin{smallmatrix} \Up \\ \Yp \\ \Uf \\ \Yf \end{smallmatrix}\right] g \,,\; \| (I - \Pi) g \|_{p} = 0 \,. \end{align*} We now apply Proposition <ref>, lift the distance constraint $\| (I - \Pi) g \|_{p} = 0$ to the objective, and recover problem (<ref>) with $\lambda$ larger than the Lipschitz constant of $\ctr(\cdot)$. Hence, (<ref>) is equivalent to (<ref>) for $\lambda$ sufficiently large. Our final convex relaxation is to choose $\lambda$ small rather than large. Namely, from the viewpoint of the objective, it lowers the cost; or, from the bi-level viewpoint, it turns the inner optimality constraint into a weaker sub-optimality constraint, i.e., we allow for solutions satisfying $\| (I - \Pi) g \|_{p} \geq 0$. Conclusion $(i)$ now follows since (<ref>) is convex; $(ii)$ follows since we have only enlarged the feasible set when passing from (<ref>) to (<ref>); and $(iii)$ follows due to the enlarged feasible set, since the costs of (<ref>) and (<ref>) coincide, and since (<ref>) is a relaxation of (<ref>) if $\lambda$ is not sufficiently large. First, we summarize the salient arguments used to pass from indirect to direct data-driven control: we relaxed problem (<ref>) by dropping the causality (block-triangularity) and LTI complexity (rank) specifications, replaced the least-square criterion (<ref>) by the equivalent least-norm formulation (<ref>), and lifted the problem from bi-level to multi-criteria, where the least-square objective induces the regularization $\|(I-\Pi)g\|_{p}$.
For equivalence to the least-square objective, the proof requires $\lambda$ larger than the (global) Lipschitz constant of $\ctr(\cdot)$, similar to robustification-induced regularizations [42, 43]. If $\ctr(\cdot)$ is only locally Lipschitz, e.g., in the case of a quadratic cost, then choosing a finite (small) $\lambda$ is a relaxation that allows the predicted trajectory to deviate from the least-square fit of the data. However, as we will see in Section <ref>, this effect is minor as long as $\lambda$ is not overly small. Second, continuing on the magnitude of $\lambda$: for exact data and under consistency assumptions, (<ref>) achieves the exact minimizer for any $\lambda \geq 0$; see Fact <ref>. When departing from these ideal assumptions, the least-square fit of the data is enforced only for $\lambda$ sufficiently large. Generally, $\lambda$ should be regarded as a tunable hyper-parameter chosen by the designer to control how much the predicted trajectory should adhere to the data (versus the control objective) and to ultimately improve the realized performance. The proof of Theorem <ref> suggests a sufficiently large value, which is also confirmed by our later empirical findings (see, e.g., Figure <ref>). Third, the regularization based on the projector $\|(I-\Pi)g\|_{p}$ differs from the standard $p$-norm regularizers $h(g) = \|g\|_{p}$ [44, 42, 43] (or squared two-norms $\|g\|_{2}^{2}$ [45, 40]). In fact, it is this projection that recovers the least-square criterion (<ref>). In contrast, norm-based regularizers $\|g\|_{p}$ are not consistent and bias the optimal solution $(u^{\star},y^{\star})$; see Remark <ref>. This is undesirable from an identification perspective: the regularizer should induce a least-square fit of the data. While for small values of $\lambda$ both regularizers have a similar effect, for sufficiently large $\lambda$ the identification-induced regularizer $\|(I-\Pi)g\|_{p}$ demonstrates superior performance; see Figure <ref> later.
Fourth, our proof strategy reveals an entire class of regularizers. In fact, we can choose any $p$-norm $\|(I-\Pi)g\|_{p}$, use more general penalty functions such as the (squared) merit functions in [53], or attack problem (<ref>) with other penalty or augmented Lagrangian methods. These degrees of freedom reflect the intuition that the Pareto-front of (<ref>) is invariant under certain (e.g., monotone) transformations of objectives such as taking squares; see <cit.> for a formal reasoning. For our later simulations in Section <ref>, we choose the computationally attractive regularization $\|(I-\Pi)g\|_{2}^{2}$. Fifth and finally, our proof arguments are admittedly “qualitative”: we cross out rank and causality constraints (as do most SPC implementations) and rely on non-quantifiable “sufficiently large” reasoning. Hence, the convex relaxation (<ref>) of (<ref>) should not be expected to be tight. Nevertheless, the formulation (<ref>) (without projector) has proved itself in many case studies and often outperforms (<ref>), as testified in [46, 47, 48, 49, 45]. Section <ref> will compare the different formulations.

§.§ Bridging Towards Structured Low-Rank Approximation

We now present an entirely non-parametric problem formulation, namely a version of subspace identification based on structured low-rank approximation [39], and we relate the resulting bi-level problem to direct data-driven control (<ref>). Given the model class $\lti_{m,\ell}^{q,n}$, we project the identification data $w_{d} \in \real^{qT}$ on $\widehat\bv_{\Tini+\Tr} \in \lti_{m,\ell}^{q,n}$. By Lemma <ref>, the latter set is characterized by all trajectories $\hat w \in \real^{q(\Tini+\Tr)}$ so that the associated Hankel matrix satisfies $\text{rank} \left(\Han_{\Tini+\Tr}(\hat w) \right) \leq m(\Tini+\Tr)+n$ for $(\Tini+\Tr) > \ell$. An implicit assumption is, of course, $\Td \gg \Tini+\Tr$: the identification data is much longer than the estimation plus control prediction horizons.
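The Hankel rank characterization, and the need for a low-rank approximation step once noise enters, can both be seen in a small numerical sketch. This is our own toy example with $q=1$, $m=0$, $n=1$, so the rank bound is $mL+n=1$; note that the nearest low-rank matrix obtained by SVD truncation (Eckart-Young) is generally no longer Hankel-structured, which is why structured low-rank approximation is the harder problem.

```python
import numpy as np

rng = np.random.default_rng(0)

def block_hankel(w, L):
    """Hankel matrix with L rows from a scalar sequence w; column k
    stacks w[k], ..., w[k+L-1]."""
    return np.column_stack([w[k:k + L] for k in range(len(w) - L + 1)])

# Trajectory of the autonomous system y(t+1) = 0.9 y(t): q = 1, m = 0, n = 1,
# so the Hankel matrix has rank at most m*L + n = 1.
L = 5
y = 0.9 ** np.arange(30)
H = block_hankel(y, L)
assert H.shape == (L, 26)
assert np.linalg.matrix_rank(H) == 1

# Measurement noise destroys the low-rank property ...
H_noisy = block_hankel(y + 0.01 * rng.standard_normal(30), L)
assert np.linalg.matrix_rank(H_noisy) == L

# ... and the nearest rank-1 matrix (Eckart-Young, via SVD truncation)
# is an unstructured matrix, not a Hankel matrix of some trajectory.
U, s, Vt = np.linalg.svd(H_noisy)
H1 = s[0] * np.outer(U[:, 0], Vt[0])
assert np.linalg.matrix_rank(H1) == 1
```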
In the presence of noise, $\Han_{\Tini+\Tr}( w_{d}) $ will not have low rank and has to be approximated by a low-rank matrix in an identification step. Thus, the identification problem (<ref>) reads as
\begin{align*}
\min_{\hat w_{d}} \quad & \cid(\hat w_{d} - w_{d}) \\
\text{subject to} \quad & \text{rank} \left(\Han_{\Tini+\Tr}(\hat w_{d}) \right) \leq m(\Tini+\Tr)+n \,.
\end{align*}
Problem (<ref>) is to be read as a low-rank approximation problem: given the identification data arranged in a Hankel matrix $\Han_{\Tini+\Tr}(w_{d})$, we seek the closest sequence $\hat w_{d}$ so that the Hankel matrix $\Han_{\Tini+\Tr}(\hat w_{d})$ has rank no more than $m(\Tini+\Tr)+n$. Since $\hat w_{d} \in \widehat \bv_{T}$, we have $\text{rank} \left(\Han_{\Tini+\Tr}(\hat w_{d}) \right) {\leq} m(\Tini+\Tr)+n$. Since also $\wini \in \widehat\bv_{\Tini}$ and $w \in \widehat\bv_{\Tr}$, we conclude
\begin{equation*}
\text{rank} \left(\left[\Han_{\Tini+\Tr}(\hat w_{d}) \,~\, \text{col}(\wini,w) \right]\right) {\leq} m(\Tini+\Tr)+n \,.
\end{equation*}
Assuming that $\text{rank} \left(\Han_{\Tini+\Tr}(\hat w_{d})\right) = m(\Tini+\Tr)+n$, which is generically the case, $\Han_{\Tini+\Tr}(\hat w_{d}) g = \text{col}(\wini,w) $ for some vector $g$. Hence, the bi-level problem (<ref>) takes the form
\begin{align*}
\min_{w \in \mathcal{W},\, g} \quad & \ctr(w-w_{r}) \\
\text{subject to} \quad & \text{col}(\wini,w) = \Han_{\Tini+\Tr}(\hat w_{d}^{\star})\, g \\
& \hat w_{d}^{\star} \in \arg\min_{\hat w_{d}} \; \cid( \hat w_{d}-w_{d}) \\
& \qquad\quad\;\; \text{subject to} \;\; \text{rank} \left(\Han_{\Tini+\Tr}( \hat w_{d})\right) = m(\Tini+\Tr)+n
\end{align*}
Consider the indirect data-driven control problem (<ref>) and the direct data-driven control problem (<ref>) for $h(g) = \|g\|_{1}$ and parameterized by $\lambda\geq0$. Let Assumptions <ref> and <ref> hold. For $\lambda$ sufficiently small, (<ref>) is a convex relaxation of (<ref>), that is, $(i)$ (<ref>) is convex, $(ii)$ any feasible $(w,g)$ in (<ref>) is also feasible for (<ref>), and $(iii)$ the optimal value of (<ref>) lower-bounds that of (<ref>). To prove the claim, one can resort to a proof strategy via the multi-criteria problem (<ref>), as in the previous section. Instead, we present a more direct approach here. We start by massaging the rank constraint in (<ref>).
First, since $ \text{rank} \left(\Han_{\Tini+\Tr}(\hat w_{d})\right) = m(\Tini+\Tr)+n$, we may without loss of generality add the constraint $\|g\|_0 \leq n+ m(\Tini+\Tr)$ to the outer problem, where $\|g\|_0$ denotes the cardinality (number of nonzero entries) of $g$. Second, we perform a convex relaxation and drop the rank constraint. Third, another convex relaxation (popular in LASSO problems [55]) is to replace $\|g\|_0 \leq n+ m(\Tini+\Tr)$ by $\|g\|_{1} \leq \alpha$ for $\alpha>0$ sufficiently large. As a result of these three steps, (<ref>) is relaxed to
\begin{align*}
\min_{w \in \mathcal{W},\, g} \quad & \ctr(w-w_{r}) \\
\text{subject to} \quad & \text{col}(\wini,w) = \Han_{\Tini+\Tr}(\hat w_{d}^{\star})\, g \,,\quad \|g\|_{1} \leq \alpha \\
& \hat w_{d}^{\star} \in \arg\min_{\hat w_{d}} \; \cid( \hat w_{d}-w_{d}) \,.
\end{align*}
Observe that under Assumption <ref> the inner problem admits a trivial solution: $\hat w_{d}^{\star}=w_{d}$. Thus, (<ref>) reduces to
\begin{equation*}
\min_{w\in\mathcal{W},\, g} \;\; \ctr( w-w_{r}) \quad \text{subject to} \quad \text{col}(\wini,w) = \Han_{\Tini+\Tr}( w_{d})\, g \,,\quad \|g\|_{1} \leq \alpha \,.
\end{equation*}
Next, we lift the 1-norm constraint to the objective
\begin{equation*}
\min_{w\in\mathcal{W},\, g} \;\; \ctr( w-w_{r}) + \lambda \cdot \|g\|_{1} \quad \text{subject to} \quad \text{col}(\wini,w) = \Han_{\Tini+\Tr}( w_{d})\, g \,,
\end{equation*}
where $\lambda\geq0$ is a scalar weight. In particular, for each value of $\alpha$ in (<ref>), there is $\lambda\geq0$ so that the solution of (<ref>) coincides with (<ref>), and vice versa. These equivalences are standard in $\ell_{1}$-regularized problems and follow from strong duality (applicable since $\ctr(\cdot)$ is convex and Slater's condition holds) [55]. The precise value of $\lambda$ depends on the Lagrange multiplier of the constraint $\|g\|_1 \leq \alpha$ and thus on the data. In either case, there is a selection of parameters so that both problems are equivalent, and choosing $\lambda$ sufficiently small is a relaxation. Thus, we arrived at the direct data-driven control (<ref>) for $\lambda$ sufficiently small and $h(g) = \|g\|_{1}$. Conclusion $(i)$ follows due to convexity of (<ref>); $(ii)$ follows since we have enlarged the feasible set passing from (<ref>) to (<ref>); and $(iii)$ follows due to the enlarged feasible set, since the costs of (<ref>) and (<ref>) coincide, and since (<ref>) is a relaxation of (<ref>) for $\lambda$ small.
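The constrained-to-penalized exchange in the last step is the standard $\ell_{1}$ mechanism. A minimal scalar illustration (our own toy example, unrelated to the control problem itself): the constraint $|x| \leq \alpha$ with $\alpha = 1$ and the penalty $\lambda |x|$ with the matched weight $\lambda = 2$ (the Lagrange multiplier of the active constraint) select the same minimizer of $f(x) = (x-2)^2$.

```python
import numpy as np

xs = np.linspace(-3.0, 3.0, 600001)        # fine grid, step 1e-5
f = (xs - 2.0) ** 2

# Constrained form: minimize f subject to |x| <= alpha with alpha = 1.
mask = np.abs(xs) <= 1.0
x_con = xs[mask][np.argmin(f[mask])]

# Penalized form with the matched weight lambda = 2 (here |f'(1)| = 2).
lam = 2.0
x_pen = xs[np.argmin(f + lam * np.abs(xs))]

# Both forms pick (numerically) the same minimizer x = 1.
assert abs(x_con - 1.0) < 1e-3
assert abs(x_pen - 1.0) < 1e-3
```

For a smaller $\lambda$ the penalized minimizer moves outside the constraint set (toward $x = 2$), which is the scalar analogue of "choosing $\lambda$ sufficiently small is a relaxation."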
In summary, to pass from indirect data-driven control (<ref>) to direct data-driven control (<ref>), we performed a sequence of convex relaxations effectively replacing the rank constraint of the system identification by an $\ell_{1}$-norm regularizer. Hence, the 1-norm regularizer accounts for selecting the model complexity. Similar remarks as those following Theorem <ref> on tightness of the relaxation apply to Theorem <ref>, too; see Remark <ref>.

§.§ Hybrid Relaxations

Theorems <ref> and <ref> reveal the roles of the two regularizers: $\|g\|_{1}$ controls the model complexity, whereas $\|(I-\Pi)g\|_{2}$ accounts for least-square fitting the data. To blend the two, consider a hybrid formulation of (<ref>) and (<ref>)
\begin{align*}
\min_{w\in\mathcal{W},\, g} \quad & \ctr(w-w_{r}) + \lambda_{1} \cdot \|(I-\Pi)g\|^{2}_{2} \\
\text{subject to} \quad & \text{col}(\wini,w) = \Han_{\Tini+\Tr}(\hat w_{d}^{\star})\, g \\
& \hat w_{d}^{\star} \in \arg\min_{\hat w_{d}} \; \cid( \hat w_{d}-w_{d}) \\
& \qquad\quad\;\; \text{subject to} \;\; \text{rank} \left(\Han_{\Tini+\Tr}( \hat w_{d})\right) = m(\Tini+\Tr)+n \,,
\end{align*}
where $\lambda_{1} \geq 0$. Observe that this formulation is consistent: Under Assumptions <ref> with $L$ replaced by $\Tini+L$, <ref>, <ref>, and <ref>, for any $\lambda_{1} \geq 0$ the minimum of (<ref>) is achieved for $w^{\star} = w_{r}$ and $\|(I-\Pi)g^{\star}\|^{2}_{2}=0$. The arguments in the previous section then lead us to
\begin{equation*}
\min_{w \in\mathcal{W},\, g} \;\; \ctr( w-w_{r}) + \lambda_{1} \cdot \|(I-\Pi)g\|^{2}_{2} + \lambda_{2} \cdot \|g\|_{1} \quad \text{subject to} \quad \text{col}(\wini,w) = \Han_{\Tini+\Tr}( w_{d})\, g \,,
\end{equation*}
where $\lambda_{2} \geq 0$. We will validate the performance of the hybrid regularizer in Section <ref> below; see specifically Figure <ref>.

§.§ Possible Pitfalls of Relaxations

Note that the two convex relaxation results in Theorems <ref> and <ref> are trivially true in the limit when $\lambda = 0$. In fact, even the abstract multi-criteria formulation (<ref>) can be related to a relaxation of the abstract bi-level problem (<ref>) in the limit $\gamma = 0$. Namely, for $\gamma = 0$, (<ref>) reduces to
\begin{equation*}
\min_{w,\, \hat w_{d},\, \widehat\bv} \;\; \ctr( w-w_{r}) \quad \text{subject to} \quad w \in \widehat\bv_{\Tini+\Tr} \,,\;\; \hat w_{d} \in \widehat\bv_{\Td} \,,\;\; \widehat\bv \in \lti_{m,\ell}^{q,n} \,.
\end{equation*}
The variable $\hat w_{d}$ and the constraint $\hat w_{d} \in \widehat\bv_{\Td}$ can be removed, and (<ref>) amounts to matching the model $\widehat\bv$ to the reference $w_{r}$.
The next result is followed by a discussion on regularizers: Consider the indirect data-driven control (<ref>) and multi-criteria problem (<ref>) in the limit $\gamma = 0$, and let Assumption <ref> hold. Then problem (<ref>) is a relaxation of problem (<ref>), that is, $(i)$ any feasible $( w,\hat w_{d},\hat \bv)$ in (<ref>) is also feasible for (<ref>), and $(ii)$ the optimal value of (<ref>) lower-bounds that of (<ref>). Consider the equivalent formulation (<ref>) of (<ref>), and note that (<ref>) equals (<ref>) when the inner optimality constraint $\cid(\hat w_{d}-w_{d}) - \varphi = 0$ is dropped. The conclusions now follow analogously as in Theorems <ref> and <ref>. Analogous corollaries can be stated for Theorems <ref> and <ref> for $\lambda = 0$. Given such results, one may wonder whether Theorems <ref> and <ref> are vacuous since they are trivially true for $\lambda = 0$. We offer several answers. First, the limit $\lambda =0$ clearly leads to a better solution $w^{\star}$ (i.e., a lower surrogate tracking error) for the open-loop optimal control problem. However, this solution merely matches the reference $w_{r}$ and does not adhere to the identification data $w_{d}$ in the sense of meeting any fitting criterion. Hence, the optimal solution $w^{\star}$ may not be a trajectory of the true system behavior, and the actual realized control performance can be arbitrarily poor. Obviously, such a situation is not desirable, and one may want to regularize with a small but non-zero $\lambda$ – an observation consistent with [41, 42, 43, 44, 45, 46] albeit derived from a different perspective. Second, Theorems <ref> and <ref> require $\lambda$ to be sufficiently small, but not zero. According to the proofs, depending on Lipschitz constants and multipliers of the respective problems, there is a smallest value for $\lambda$ so that the behavior $\widehat\bv$ matches (in the $\cid(\cdot)$ fitting criterion) the plant behavior $\bv^{P}$.
In [41, 42, 43, 44, 45, 46] the coefficient $\lambda$ relates to a desired robustness level. In either case, $\lambda$ can hardly be quantified a priori and without cross-validation; see also Remark <ref>. We follow up on this set of questions in the next section. § NUMERICAL ANALYSIS AND COMPARISONS We now numerically investigate the effect of the hyper-parameter $\lambda$, confirm the superiority of the regularizer $h(g) = \|(I-\Pi)g\|_{2}^{2}$, and compare direct and indirect approaches. §.§ Choice of Regularization Parameter We first study the parameter $\lambda$ regularizing direct data-driven control (<ref>). Consider the benchmark single-input, single-output, 5th order, linear time-invariant system [56]. Denoting the $t$-th element of the concatenated input and output by ${w}(t)=({u}(t),{y}(t))$, the control cost was chosen as $c_\textup{ctrl}({w}-w_r) = ({w}-w_r)^{\top}W({w}-w_r)$ with reference $w_r(t) = (u_r(t),y_r(t)) = (0,\sin(2\pi t/(L-1)))$ for $t\in\{0,1,\dots,L-1\}$, prediction horizon $L=20$, $W=I_L\otimes\textup{diag}(0.01,2000)$, where $I_L$ is the $L \times L$ identity, and $\otimes$ denotes the Kronecker product. In this entire section, we disregard constraints, i.e., $\mathcal W \equiv \real^{q\Tr}$. We used a 1-norm regularizer $h(g) = \|g\|_{1}$ in (<ref>) and a prefix-trajectory of length $\Tini=5$ (see Section <ref>). We collected one noise-free input/output time series of length $T=250$ by applying a random Gaussian input. From this noise-free data set, 100 independent noisy data sets were constructed by adding Gaussian noise with a noise-to-signal ratio of 5%. For each data set and each value of $\lambda\in(0,10^3)$, optimal control inputs were computed from (<ref>). We define the predicted error as $c_{\textup{ctrl}}(w^{\star}-w_r)$, where $w^{\star}$ is an optimizer of (<ref>). 
We define the realized error as $c_{\textup{ctrl}}(w_{\textup{true}}-w_r)$, where $w_{\textup{true}}$ is the realized trajectory of the system after applying the computed optimal inputs. The predicted and realized errors were converted to a percentage increase in error with respect to the ground-truth optimal performance (i.e., if the deterministic system was exactly known), and were averaged over the 100 independent data sets. The results are plotted in Figure <ref>. It is apparent that choosing $\lambda$ too small leads to an optimistic predicted error but very poor realized performance. Furthermore, the performance is poor for large values of $\lambda$ indicating that the regularization parameter should be chosen carefully (though a wide range delivers equally good results). These observations are consistent with those in [41, 44, 42, 43, 45, 46] and the hypotheses discussed at the end of Section <ref>. Predicted and realized errors (relative to the ground-truth optimal performance and averaged over 100 data sets) with 1-norm regularizer $\lambda \|g\|_{1}$. §.§ Role of Projection in Two-Norm Regularization Theorem <ref> suggests that the identification-induced regularizer $h(g) = \|(I-\Pi)g\|_{2}^{2}$ is superior to a two-norm regularizer $h(g) = \|g\|_{2}^{2}$ if one is interested in consistency and the predicted trajectory adhering to a least-square fit of the data. To test this hypothesis, we consider the same case study from Section <ref> and report the averaged cost in Figure <ref>. Comparison of the realized performance (relative to the ground-truth optimal performance and averaged over 100 data sets) for the two-norm $\|g\|_{2}^{2}$ and identification-induced regularization $\|(I-\Pi)g\|_{2}^{2}$ as function of $\lambda$. Both regularizers perform similarly for small $\lambda$, but the identification-induced regularizer shows a superior and surprisingly constant performance for sufficiently large $\lambda$. 
By the proof of Theorem <ref>, for $\lambda$ sufficiently large, the direct and indirect problems (<ref>) and (<ref>) are equivalent up to causality and complexity constraints. Thus, a sufficiently large $\lambda$ forces the least-square fit (<ref>) and results in excellent performance independent of the specific value of $\lambda$. While there is a small window where the two-norm excels, the identification-induced regularizer shows overall much more robust performance.

Realized error (relative to the ground-truth optimal performance and averaged over 100 data sets) for a hybrid regularizer $\lambda_{1} \|(I-\Pi)g\|_{2}^{2} + \lambda_{2} \|g\|_{1}$.

Next, we study the merits of hybrid regularization (<ref>). For the same case study, Figure <ref> shows the averaged realized performance plotted over the regularization parameters. The $\{\lambda_{1}=0\}$ and $\{\lambda_{2}=0\}$ slices recover Figures <ref> and <ref>. As before, the regularizer $ \|(I-\Pi)g\|_{2}^{2}$ is more robust, though a hybrid regularizer yields a minor albeit robust improvement. A closer examination of the data underlying Figure <ref> reveals that a hybrid regularization can improve by up to 15% over the best results achievable with the regularizer $ \|(I-\Pi)g\|_{2}^{2}$ only.

§.§ Effect of Data Length

We continue with the same case study and discuss the effect of data length on direct and indirect methods. For the direct method, we used the identification-induced regularizer $h(g) = \|(I-\Pi)g\|_2^2$ with sufficiently large weight $\lambda = 10000$, as indicated in Figure <ref>. For the indirect method, the inner system identification problem (<ref>) is solved using the subspace approach N4SID [26] with prefix horizon $\Tini=5$, prediction horizon $L=20$, and (correct) model-order selection $n=5$. For our case study, Lemma <ref> demands at least $T = 59$ data points.
Figure <ref> below shows the beneficial effects of including more data on the realized median performance of the direct and indirect methods. The main findings are as follows: First, both methods are asymptotically consistent. Second, the indirect method is superior in the low-data regime, echoing that models are compressed and de-noised representations; see Remark <ref>. Third and finally, when an incorrect model-order $n=6$ is selected for the indirect method (resulting in an over-parameterization and thus a bias), then consistency is lost, and the direct method is superior. This effect is even more pronounced when studying the average (as opposed to the median) error due to several outliers of the indirect method. This third point hints at a bias-variance trade-off between the direct and indirect methods, which will be studied below.

Realized median error (over 100 data sets) for the direct and indirect (with different model order selections) methods for varying amount of data.

§.§ Comparison and Bias-Variance Hypotheses

We now compare the direct and indirect approaches through two case studies. The first study evaluates the performance of both methods on the basis of “variance” error, i.e., on a linear system with noisy measurements. The second study evaluates the performance on the basis of “bias” error, i.e., on a nonlinear system with noise-free measurements. We expect the direct method to perform better on the nonlinear system since the indirect method erroneously selects a linear model class, thus leading to a larger “bias” error. On the other hand, we expect the indirect method to perform better on the linear system with noisy outputs since the identification step filters noise, thus leading to a lower “variance” error.

§.§ Comparison: Stochastic Linear System

Consider the same case study as in Section <ref>, i.e., same LTI system, cost, and reference.
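For concreteness, the cost and reference of this case study can be assembled in a few lines. This is a sketch of the setup only (the benchmark plant, the data, and the optimization itself are omitted):

```python
import numpy as np

L = 20                                        # prediction horizon
t = np.arange(L)

# Reference w_r(t) = (u_r(t), y_r(t)) = (0, sin(2*pi*t/(L-1))), stacked per step.
w_r = np.stack([np.zeros(L), np.sin(2 * np.pi * t / (L - 1))], axis=1).reshape(-1)

# Weight W = I_L kron diag(0.01, 2000): cheap input effort, heavy output tracking.
W = np.kron(np.eye(L), np.diag([0.01, 2000.0]))

def ctrl_cost(w):
    """Quadratic control cost c_ctrl(w - w_r) = (w - w_r)^T W (w - w_r)."""
    e = w - w_r
    return float(e @ W @ e)

assert W.shape == (2 * L, 2 * L)
assert ctrl_cost(w_r) == 0.0                  # zero cost at the reference
```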
We collected data for varying levels of noise-to-signal ratio, i.e., we considered measurements that were affected by Gaussian noise with noise-to-signal ratio in the set $\{0\%,1\%,\dots,15\%\}$. For each noise-to-signal ratio, $T=250$ input/output data samples were collected by applying a random Gaussian input. This data was then used for both the direct and indirect methods. For the indirect method, the inner system identification problem (<ref>) is again solved using N4SID [26] with prefix horizon $\Tini=5$ and prediction horizon $L=20$. Equipped with a (correct) 5th-order identified model, optimal control inputs are computed by solving (<ref>). The indirect method was compared to the direct method (<ref>), with $h(g) = \|g\|_{1}$, $\Tini=5$, and $\lambda=27$. The hyper-parameters of both methods were kept constant for all simulations below and chosen to give good realized control performance for all noise-to-signal ratios. For both methods we recorded the realized performance after applying the open-loop inputs and converted it to a percentage error with respect to the best possible performance (i.e., if the deterministic model was exactly known). For each noise-to-signal ratio, 100 simulations were conducted with different random data sets. The results are displayed in the box plot in Figure <ref> and show that both methods perform well for low levels of noise (up to approximately $2\%$ noise-to-signal ratio). As the data becomes noisier, the performance of the direct method degrades significantly, while the performance of the indirect method remains relatively constant. We remark that a slightly better albeit qualitatively similar result is obtained with the regularizer $ \|(I-\Pi)g\|_{2}^{2}$. We attribute these observations to the fact that identification de-noises the data. These results confirm our hypothesis that the indirect method is superior in terms of “variance” error. Comparison of direct and indirect methods for varying noise. 
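The data-corruption step described above can be sketched as follows; we assume (our own choice, not stated in the text) that the noise-to-signal ratio is measured in root-mean-square terms:

```python
import numpy as np

def add_noise(y, ratio, rng):
    """Add Gaussian noise scaled to a prescribed noise-to-signal ratio,
    where the ratio is taken (an assumption) as RMS(noise) / RMS(signal)."""
    noise = rng.standard_normal(y.shape)
    scale = ratio * np.sqrt(np.mean(y ** 2)) / np.sqrt(np.mean(noise ** 2))
    return y + scale * noise

rng = np.random.default_rng(0)
y = np.sin(0.1 * np.arange(250))              # stand-in for the clean output
y_noisy = add_noise(y, 0.05, rng)             # 5% noise-to-signal ratio

err = y_noisy - y
measured = np.sqrt(np.mean(err ** 2)) / np.sqrt(np.mean(y ** 2))
assert abs(measured - 0.05) < 1e-9            # ratio holds by construction
```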
§.§ Comparison: Deterministic Nonlinear System

We now consider the scenario where the direct and indirect methods are subject to a “bias” error, but not a “variance” error. Consider the discrete-time nonlinear Lotka-Volterra dynamics used for direct data-driven control in [57]
\[
\begin{aligned}
x(t_{k+1}) &= f_{\textup{nonlinear}}(x(t_k),u(t_k)) \\
&= \left[\begin{smallmatrix}
x_1(t_k) + \Delta t(ax_1(t_k) - bx_1(t_k)x_2(t_k)) \\
x_2(t_k) + \Delta t(dx_1(t_k)x_2(t_k) -cx_2(t_k) + u(t_k))
\end{smallmatrix}\right]\,,
\end{aligned}
\]
where $t_{k+1} - t_k = \Delta t = 0.01$, $a=c=0.5$, $b=0.025$, $d=0.005$, and $x(t_k) = \begin{bmatrix} x_1(t_k) & x_2(t_k)\end{bmatrix}^{\top}$. Here, $(x_1(t_k),x_2(t_k))$ denote prey and predator populations, and $u(t_k)$ is the input. A linearization about the equilibrium $(\bar{u},\bar{x}_1,\bar{x}_2)=(0,c/d,a/b)$ yields the affine linear system
\[
\begin{aligned}
x(t_{k+1}) &= f_{\textup{linear}}(x(t_k),u(t_k),\bar{x}_1,\bar{x}_2) \\
&= \left[\begin{smallmatrix}
x_1(t_k) + \Delta t\left((a-b\bar{x}_2)(x_1(t_k)-\bar{x}_1) - b\bar{x}_1(x_2(t_k)-\bar{x}_2)\right) \\
x_2(t_k) + \Delta t\left(d\bar{x}_2(x_1(t_k)-\bar{x}_1) +(d\bar{x}_1 -c)(x_2(t_k)-\bar{x}_2) + u(t_k)\right)
\end{smallmatrix}\right]\,.
\end{aligned}
\]
We expect direct data-driven control (<ref>) to perform well on such a nonlinear system for two reasons: $(i)$ nonlinear systems can be well approximated by LTI systems of sufficiently high complexity; and $(ii)$ the direct method (<ref>) does not specify the LTI system complexity (e.g., by enforcing rank constraints).
We compare the direct and indirect methods for varying degree of nonlinearity by interpolating between $f_{\textup{nonlinear}}$ and $f_{\textup{linear}}$, i.e., we study the interpolated system
\begin{equation}
\begin{aligned}
x(t_{k+1}) &= \epsilon \cdot f_{\textup{linear}}(x(t_k),u(t_k),\bar{x}_1,\bar{x}_2)\\
&\quad + (1-\epsilon) \cdot f_{\textup{nonlinear}}(x(t_k),u(t_k))
\end{aligned}
\label{eq:interpolated_dynamics}
\end{equation}
for $\epsilon\in [0,1]$. For $\epsilon=1$ (resp. $\epsilon=0$), the dynamics are purely affine (resp. nonlinear). For each $\epsilon\in\{0,0.1,\dots,1\}$, $T=2415$ data points were collected by applying a noisy sinusoidal input $u(t_k) = 2(\sin(t_k)+\sin(0.1t_k))^2 + v(t_k)$ with $v(t_k)$ sampled from a Gaussian random variable. Full state measurement was assumed. The data collection was repeated for 100 different initial conditions. For each degree of nonlinearity $\epsilon\in\{0,0.1,\dots,1\}$ and each initial condition, the data was used to compute optimal open-loop control inputs using direct and indirect methods. The control cost was chosen as $c_\textup{ctrl}({w}-w_r) = \| {w}-w_r\|_{2}^{2}$ with equilibrium reference $w_r = (0,100,20,\dots,0,100,20)$, $L=600$, and $w=(u,x)$. For the indirect method, the inner system identification optimization problem given by (<ref>) is solved using the subspace approach N4SID [26] with initial condition horizon $\Tini=4$ and prediction horizon $L=600$. A model order of 4 was chosen, as it produced the best performance as measured by the realized control cost. Optimal control inputs were then computed by solving (<ref>). For comparison, we chose the direct method (<ref>) with $h(g) = \|g\|_{1}$, $\Tini=4$, and $\lambda=8000$. The performance was measured with the realized control cost after applying the open-loop inputs to system (<ref>). As before, the hyper-parameters of both direct/indirect methods were judiciously chosen and kept constant for all simulations.
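The interpolated dynamics (<ref>) can be implemented directly from the parameters above. The sketch below (our own implementation, using the stated values $\Delta t = 0.01$, $a=c=0.5$, $b=0.025$, $d=0.005$) also checks that the equilibrium $(\bar u,\bar x_1,\bar x_2) = (0, c/d, a/b) = (0, 100, 20)$ is a fixed point for every interpolation weight $\epsilon$:

```python
import numpy as np

dt, a, b, c, d = 0.01, 0.5, 0.025, 0.5, 0.005
x1_bar, x2_bar, u_bar = c / d, a / b, 0.0         # equilibrium (100, 20, 0)

def f_nonlinear(x, u):
    x1, x2 = x
    return np.array([
        x1 + dt * (a * x1 - b * x1 * x2),
        x2 + dt * (d * x1 * x2 - c * x2 + u),
    ])

def f_linear(x, u):
    x1, x2 = x
    return np.array([
        x1 + dt * ((a - b * x2_bar) * (x1 - x1_bar) - b * x1_bar * (x2 - x2_bar)),
        x2 + dt * (d * x2_bar * (x1 - x1_bar) + (d * x1_bar - c) * (x2 - x2_bar) + u),
    ])

def f_interp(x, u, eps):
    """Interpolated dynamics: eps = 1 purely affine, eps = 0 purely nonlinear."""
    return eps * f_linear(x, u) + (1 - eps) * f_nonlinear(x, u)

x_eq = np.array([x1_bar, x2_bar])
for eps in (0.0, 0.5, 1.0):
    assert np.allclose(f_interp(x_eq, u_bar, eps), x_eq)   # fixed point
```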
Comparison of direct and indirect methods for varying nonlinearity.

The results displayed in Figure <ref> show that both methods perform well for low levels of nonlinearity ($\epsilon \in [0.7, 1]$). As the system becomes increasingly nonlinear, the performance of the indirect method degrades significantly, while the performance of the direct method remains relatively constant. We attribute this observation to the fact that the indirect method incurs a “bias” error from selecting a linear model class and applying certainty-equivalence control, while the direct method uses data from the nonlinear system without bias. These findings confirm our earlier bias-variance observations from Figure <ref>.

§ DISCUSSION AND CONCLUSIONS

We studied the relationship between indirect and direct data-driven control formulated as bi-level (first-identify, then control) and single-level regularized (based on the Fundamental Lemma) optimization problems, respectively. An intermediate multi-criteria problem allowed us to efficiently transition between both formulations. We concluded that the regularized direct approach can be viewed as a convex relaxation of the indirect approach, where the choice of regularizer depended on the problem formulation and accounted for an implicit identification step. We also discovered a novel regularizer that is consistent and accounts for least-square identification. Our results suggested the use of the indirect method in case of “variance” errors and the use of the direct method in presence of “bias” errors (e.g., a nonlinear system or when selecting a wrong model order). These insights echo the bias-variance trade-offs previously encountered for direct and indirect methods in [20, 51], and they shed partial light on the remarkable empirical performance of (direct) data-enabled predictive control applied to nonlinear systems.
As a limitation, our results concern only the open-loop predictive control problem, though we ultimately care about the realized performance, especially in a receding horizon closed-loop implementation. Some preliminary results on the realized performance of regularized control formulations were obtained in [46] through the lens of robust optimization, but the topic remains largely open. Moreover, we believe that the proposed multi-criteria data-driven control formulation is important in its own right and may deliver excellent performance if one were to find a convex formulation and appropriate trade-off parameter. Both of these are formidable tasks for future work. Finally, we believe that our approach is also applicable to other identification and control formulations and may deliver interesting and novel direct data-driven control formulations.

§ ACKNOWLEDGEMENTS

The authors acknowledge their colleagues at ETH Zürich, in particular Miguel Picallo Cruz, for fruitful discussions.

Consider the mathematical program
\begin{align*}
\boldsymbol{MP}: \quad \min_{x \in C} \quad & f(x) \\
\text{subject to} \quad & g(x) \leq 0 \\
& h(x) = 0 \,,
\end{align*}
where $C \subset \real^{n}$ is closed, and $f,g,h$ are lower semicontinuous maps from $\real^{n}$ to $\real$, $\real^{m}$, and $\real$. Consider the perturbation
\begin{align*}
\boldsymbol{MP}_{\epsilon}: \quad \min_{x \in C} \quad & f(x) \\
\text{subject to} \quad & g(x) \leq 0 \\
& h(x) = \epsilon
\end{align*}
for $\epsilon \in \real$. We recall the definition of partial calmness [53]: Let $x^{\star}$ solve $\boldsymbol{MP}$, and let $\mathbb B_{n}$ denote the open unit ball in $\mathbb R^{n}$. Then $\boldsymbol{MP}$ is said to be partially calm at $x^{\star}$ provided that there are $\mu>0$ and $\delta>0$ such that, for all $\epsilon \in \delta \mathbb B_{1}$ and all $x \in x^{\star}+\delta \mathbb B_{n}$ feasible for $\boldsymbol{MP}_{\epsilon}$, one has
\begin{equation}
f(x) + \mu | h(x) | \geq f(x^{\star})
\label{eq: calmness} \,.
\end{equation}
Partial calmness is equivalent to exact penalization.
In particular, consider for $\mu \geq 0$ the penalized mathematical program
\begin{align*}
\boldsymbol{PMP}_{\mu}: \quad \min_{x \in C} \quad & f(x) + \mu \cdot |h(x)| \\
\text{subject to} \quad & g(x) \leq 0 \,.
\end{align*}
We summarize <cit.>, <cit.>: Assume that $f$ is continuous, and let $x^{\star}$ be a local minimizer of $\boldsymbol{MP}$. Then $\boldsymbol{MP}$ is partially calm at $x^{\star}$ if and only if there is $\mu^{\star} >0$ so that $x^{\star}$ is a local minimizer of $\boldsymbol{PMP}_{\mu}$ for all $\mu \geq \mu^{\star}$. Moreover, any local minima of $\boldsymbol{PMP}_{\mu}$ with $\mu > \mu^{\star}$ are also local minima of $\boldsymbol{MP}$. Partial calmness has been studied for a range of problems, particularly bi-level problems [52, 53, 58]. Of specific importance to us is a related result due to Clarke <cit.> which allows for exact penalization (or equivalently partial calmness) and reads in our notation as follows. Consider the mathematical program $\boldsymbol{MP}$ and its penalized version $\boldsymbol{PMP}_{\mu}$. Assume that $f$ is Lipschitz continuous with Lipschitz constant $L$, the equality constraint takes the form of a distance to a closed set $S \subset C$,
\begin{equation*}
0 = h(x) = \text{distance}(x,S) = \text{inf}_{y \in S} \|x-y\| \,,
\end{equation*}
and $\boldsymbol{MP}$ attains a minimum. Then for $\mu > \mu^{\star} = L$, any local minimum of $\boldsymbol{PMP}_{\mu}$ is also a local minimum of $\boldsymbol{MP}$. We note that $\|\cdot\|$ in Proposition <ref> can be an arbitrary norm. For a more general problem setup with a parametric set $S$ depicting the value function of an (inner) optimization problem, the reader is referred to <cit.>. Additionally, the setup can be extended to (squared) merit functions as penalty functions <cit.>. These generalize the notion of distance but are easier to formulate and compute. [1] L. Hewing, K. P. Wabersich, M. Menner, and M. N. Zeilinger, “Learning-based model predictive control: Toward safe learning in control,” Annual Review of Control, Robotics, and Autonomous Systems, vol. 3, pp. 269–296, [2] G.
Pillonetto, F. Dinuzzo, T. Chen, G. De Nicolao, and L. Ljung, “Kernel methods in system identification, machine learning and function estimation: A survey,” Automatica, vol. 50, no. 3, pp. 657–682, 2014. [3] A. Chiuso and G. Pillonetto, “System identification: A machine learning perspective,” Annual Review of Control, Robotics, and Autonomous Systems, vol. 2, pp. 281–304, 2019. [4] Z.-S. Hou and Z. Wang, “From model-based control to data-driven control: Survey, classification and perspective,” Information Sciences, vol. 235, pp. 3–35, 2013. [5] B. Recht, “A tour of reinforcement learning: The view from continuous control,” Annual Review of Control, Robotics, and Autonomous Systems, vol. 2, pp. 253–279, 2019. [6] I. Markovsky and F. Dörfler, “Behavioral systems theory in data-driven analysis, signal processing, and control,” July 2021. [Online]. Available: <http://homepages.vub.ac.be/ imarkovs/publications/overview-ddctr.pdf> [7] H. Hjalmarsson, “From experiment design to closed-loop control,” Automatica, vol. 41, no. 3, pp. 393–438, 2005. [8] S. Meyn, Control Systems and Reinforcement Learning. Cambridge University Press, 2022. [9] H. Hjalmarsson, M. Gevers, S. Gunnarsson, and O. Lequin, “Iterative feedback tuning: Theory and applications,” IEEE Control Systems Magazine, vol. 18, no. 4, pp. 26–41, 1998. [10] M. C. Campi, A. Lecchini, and S. M. Savaresi, “Virtual reference feedback tuning: A direct method for the design of feedback controllers,” Automatica, vol. 38, no. 8, pp. 1337–1346, 2002. [11] A. S. Bazanella, L. Campestrini, and D. Eckhard, Data-driven controller design: the H2 approach. Springer Science & Business Media, 2011. [12] H. Hjalmarsson, M. Gevers, and F. De Bruyne, “For model-based control design, closed-loop identification gives better performance,” Automatica, vol. 32, no. 12, pp. 1659–1673, 1996. [13] M.
Gevers, “Identification for control: From the early achievements to the revival of experiment design,” European Journal of Control, vol. 11, pp. 1–18, 2005. [14] R. J. Schrama, “Accurate identification for control: The necessity of an iterative scheme,” IEEE Transactions on Automatic Control, vol. 37, no. 7, pp. 991–994, 1992. [15] S. Formentin and A. Chiuso, “CoRe: control-oriented regularization for system identification,” in 2018 IEEE Conference on Decision and Control (CDC). IEEE, 2018, pp. [16] A. Feldbaum, “Dual control theory problems,” IFAC Proceedings Volumes, vol. 1, no. 2, pp. 541–550, 1963. [17] M. Ferizbegovic, J. Umenberger, H. Hjalmarsson, and T. B. Schön, “Learning robust lq-controllers using application oriented exploration,” IEEE Control Systems Letters, vol. 4, no. 1, pp. 19–24, 2019. [18] C. A. Larsson, A. Ebadat, C. R. Rojas, X. Bombois, and H. Hjalmarsson, “An application-oriented approach to dual control with excitation for closed-loop identification,” European Journal of Control, vol. 29, pp. 1–16, [19] A. Iannelli, M. Khosravi, and R. S. Smith, “Structured exploration in the finite horizon linear quadratic dual control problem,” IFAC-PapersOnLine, vol. 53, no. 2, pp. 959–964, 2020. [20] L. Campestrini, D. Eckhard, A. S. Bazanella, and M. Gevers, “Data-driven model reference control design by prediction error identification,” Journal of the Franklin Institute, vol. 354, no. 6, pp. 2628–2647, 2017. [21] J. C. Willems, “The behavioral approach to open and interconnected systems,” IEEE Control Systems Magazine, vol. 27, no. 6, pp. 46–99, 2007. [22] ——, “Paradigms and puzzles in the theory of dynamical systems,” IEEE Transactions on Automatic Control, vol. 36, no. 3, pp. 259–294, [23] J. C. Willems and J. W. Polderman, Introduction to mathematical systems theory: A behavioral approach. Springer, 1997, vol. 26. [24] P. Van Overschee and B.
De Moor, Subspace identification for linear systems: Theory, Implementation, Applications.1em plus 0.5em minus 0.4emSpringer Science & Business Media, 2012. [25] T. Katayama, Subspace methods for system identification.1em plus 0.5em minus 0.4emSpringer Science & Business Media, 2006. [26] P. Van Overschee and B. De Moor, “N4SID: Subspace algorithms for the identification of combined deterministic-stochastic systems,” Automatica, vol. 30, no. 1, pp. 75–93, 1994. [27] J. C. Willems, P. Rapisarda, I. Markovsky, and B. L. De Moor, “A note on persistency of excitation,” Systems & Control Letters, vol. 54, no. 4, pp. 325–329, 2005. [28] H. J. van Waarde, C. De Persis, M. K. Camlibel, and P. Tesi, “Willems' fundamental lemma for state-space systems and its extension to multiple datasets,” IEEE Control Systems Letters, vol. 4, no. 3, pp. 602–607, [29] I. Markovsky and F. Dörfler, “Identifiability in the behavioral setting,” September 2020, Submitted. Available at http://homepages.vub.ac.be/ imarkovs/publications/identifiability.pdf. [30] I. Markovsky, J. C. Willems, S. Van Huffel, and B. De Moor, Exact and approximate modeling of linear systems: A behavioral approach.1em plus 0.5em minus 0.4emSIAM, 2006, vol. 11. [31] I. Markovsky, J. C. Willems, P. Rapisarda, and B. L. De Moor, “Algorithms for deterministic balanced subspace identification,” Automatica, vol. 41, no. 5, pp. 755–766, 2005. [32] W. Favoreel, B. De Moor, and M. Gevers, “SPC: subspace predictive control,” IFAC Proceedings Volumes, vol. 32, no. 2, pp. 4004–4009, 1999. [33] S. J. Qin, W. Lin, and L. Ljung, “A novel subspace identification approach with enforced causal models,” Automatica, vol. 41, no. 12, pp. 2043–2053, 2005. [34] B. Huang and R. Kadali, Dynamic modeling, predictive control and performance monitoring: A data-driven subspace approach.1em plus 0.5em minus 0.4emSpringer, 2008. [35] J. Berberich, C. W. Scherer, and F. 
Allgöwer, “Combining prior knowledge and data for robust controller design,” arXiv preprint arXiv:2009.05253, 2020. [36] H. J. van Waarde, M. K. Camlibel, and M. Mesbahi, “From noisy data to feedback controllers: non-conservative design via a matrix S-lemma,” arXiv preprint arXiv:2006.00870, 2020. [37] C. De Persis and P. Tesi, “Formulas for data-driven control: Stabilization, optimality, and robustness,” IEEE Transactions on Automatic Control, vol. 65, no. 3, pp. 909–924, 2019. [38] I. Markovsky and P. Rapisarda, “Data-driven simulation and control,” International Journal of Control, vol. 81, no. 12, pp. 1946–1959, [39] I. Markovsky, “A missing data approach to data-driven filtering and control,” IEEE Transactions on Automatic Control, vol. 62, no. 4, pp. 1972–1978, 2016. [40] J. Berberich, J. Köhler, M. A. Muller, and F. Allgöwer, “Data-driven model predictive control with stability and robustness guarantees,” IEEE Transactions on Automatic Control, 2020. [41] J. Coulson, J. Lygeros, and F. Dörfler, “Data-enabled predictive control: In the shallows of the DeePC,” in European Control Conference, 2019, pp. 307–312. [42] ——, “Regularized and distributionally robust data-enabled predictive control,” in IEEE Conference on Decision and Control, 2019, pp. [43] ——, “Distributionally robust chance constrained data-enabled predictive control,” 2020, In press. DOI 10.1109/TAC.2021.3097706. [44] A. Xue and N. Matni, “Data-driven system level synthesis,” arXiv preprint arXiv:2011.10674, 2020. [45] L. Huang, J. Zhen, J. Lygeros, and F. Dörfler, “Quadratic regularization of data-enabled predictive control: Theory and application to power converter experiments,” 2020, In press. Available at [46] L. Huang, Z. Jianzhe, J. Lygeros, and F. Dörfler, “Robust data-enabled predictive control: Tractable formulations and performance guarantees,” 2021, Submitted. Available at <https://arxiv.org/abs/2105.07199>. [47] L. Huang, J. Coulson, J. Lygeros, and F. 
Dörfler, “Decentralized data-enabled predictive control for power system oscillation damping,” IEEE Transactions on Control Systems Technology, 2021, In press. DOI [48] P. G. Carlet, A. Favato, S. Bolognani, and F. Dörfler, “Data-driven predictive current control for synchronous motor drives,” in 2020 IEEE Energy Conversion Congress and Exposition (ECCE), 2020, pp. 5148–5154. [49] E. Elokda, J. Coulson, P. Beuchat, J. Lygeros, and F. Dörfler, “Data-enabled predictive control for quadcopters,” International Journal of Robust and Nonlinear Control, 2019, In press. DOI [50] M. Yin, A. Iannelli, and R. S. Smith, “Maximum likelihood estimation in data-driven modeling and control,” arXiv preprint arXiv:2011.00925, [51] V. Krishnan and F. Pasqualetti, “On direct vs indirect data-driven predictive control,” arXiv preprint arXiv:2103.14936, 2021. [52] J. Ye and D. Zhu, “Optimality conditions for bilevel programming problems,” Optimization, vol. 33, no. 1, pp. 9–27, 1995. [53] J. Ye, D. Zhu, and Q. J. Zhu, “Exact penalization and necessary optimality conditions for generalized bilevel programming problems,” SIAM Journal on optimization, vol. 7, no. 2, pp. 481–507, 1997. [54] H. Xu, C. Caramanis, and S. Mannor, “Robust regression and lasso,” IEEE Transactions on Information Theory, vol. 56, no. 7, pp. 3561–3574, 2010. [55] T. Hastie, R. Tibshirani, and M. Wainwright, Statistical learning with sparsity: the lasso and generalizations.1em plus 0.5em minus 0.4emCRC press, 2015. [56] I. Landau, D. Rey, A. Karimi, A. Voda, and A. Franco, “A flexible transmission system as a benchmark for robust digital control,” European Journal of Control, vol. 1, no. 2, pp. 77–96, 1995. [57] E. Kaiser, J. N. Kutz, and S. L. Brunton, “Sparse identification of nonlinear dynamics for model predictive control in the low-data limit,” Proceedings of the Royal Society A, vol. 474, no. 2219, p. 20180335, [58] P. Mehlitz, L. I. Minchenko, and A. B. 
Zemkoho, “A note on partial calmness for bilevel optimization problems with linearly structured lower level,” Optimization Letters, pp. 1–15, 2020. [59] F. H. Clarke, Optimization and nonsmooth analysis.1em plus 0.5em minus 0.4emSIAM, 1990. []Florian Dörfler (S'09-M'13-S'21) is an Associate Professor at the Automatic Control Laboratory at ETH Zürich and the Associate Head of the Department of Information Technology and Electrical Engineering. He received his Ph.D. degree in Mechanical Engineering from the University of California at Santa Barbara in 2013, and a Diplom degree in Engineering Cybernetics from the University of Stuttgart in 2008. From 2013 to 2014 he was an Assistant Professor at the University of California Los Angeles. His primary research interests are centered around control, optimization, and system theory with applications in network systems, in particular electric power grids. He is a recipient of the distinguished young research awards by IFAC (Manfred Thoma Medal 2020) and EUCA (European Control Award 2020). His students were winners or finalists for Best Student Paper awards at the European Control Conference (2013, 2019), the American Control Conference (2016), the Conference on Decision and Control (2020), the PES General Meeting (2020), and the PES PowerTech Conference (2017). He is furthermore a recipient of the 2010 ACC Student Best Paper Award, the 2011 O. Hugo Schuck Best Paper Award, the 2012-2014 Automatica Best Paper Award, the 2016 IEEE Circuits and Systems Guillemin-Cauer Best Paper Award, and the 2015 UCSB ME Best PhD award. []Jeremy Coulson (S'09-M'13) is a PhD student with the Automatic Control Laboratory at ETH Zürich. He received his Master of Applied Science in Mathematics & Engineering from Queen's University, Canada in August 2017. He received his B.Sc.Eng degree in Mechanical Engineering & Applied Mathematics from Queen's University in 2015. 
His research interests include data-driven control methods and stochastic optimization.

Ivan Markovsky is an Associate Professor at the department ELEC of the Vrije Universiteit Brussel. He received his Ph.D. degree in Electrical Engineering from the Katholieke Universiteit Leuven in February 2005. From 2006 to 2012 he was an Assistant Professor at the School of Electronics and Computer Science of the University of Southampton. He is a recipient of an ERC starting grant "Structured low-rank approximation: Theory, algorithms, and applications" (2010–2015), an honorable mention for the Householder Prize (2008), and a research mandate from the Vrije Universiteit Brussel research council (2012–2022). His main research interests are computational methods for system theory, identification, and data-driven control in the behavioral setting.
# Starshade Rendezvous: Exoplanet Orbit Constraints from Multi-Epoch Direct Imaging Andrew Romero-Wolf Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109, USA Geoffrey Bryden Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109, USA Greg Agnes Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109, USA Jonathan W. Arenberg Northrop Grumman Aerospace Systems, Redondo Beach, CA 90278, USA Samuel Case Bradford Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109, USA Simone D’Amico Stanford University, Stanford, CA 94305, USA John Debes Space Telescope Science Institute, Baltimore, MD 21218, USA Matt Greenhouse NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA Renyu Hu Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109, USA Steve Matousek Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109, USA Jason Rhodes Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109, USA John Ziemer Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109, USA ###### Abstract The addition of an external starshade to the Nancy Grace Roman Space Telescope will enable the direct imaging of Earth-radius planets orbiting at $\sim$1 AU. Classification of any detected planets as Earth-like requires both spectroscopy to characterize their atmospheres and multi-epoch imaging to trace their orbits. We consider here the ability of the Starshade Rendezvous Probe to constrain the orbits of directly imaged Earth-like planets. The target list for this proposed mission consists of the 16 nearby stars best suited for direct imaging. The field of regard for a starshade mission is constrained by solar exclusion angles, resulting in four observing windows during a two-year mission. 
We find that for habitable-zone planetary orbits that are detected at least three times during the four viewing opportunities, the semi-major axes are measured with a median precision of 7 mas, or a median fractional precision of 3%. Habitable-zone planets can be correctly identified as such 96.7% of the time, with a false positive rate of 2.8%. If a more conservative criterion is used for habitable-zone classification (95% probability), the false positive rate drops close to zero, but only 81% of the truly Earth-like planets are correctly classified as residing in the habitable zone.

Keywords: planets, imaging

## 1 Introduction

The Starshade Rendezvous Probe (SRP) mission concept proposes adding a starshade to the Nancy Grace Roman Space Telescope, enabling the detection of habitable zone exoplanets and characterization of their atmospheres (Seager et al., 2019). Romero-Wolf et al. (2020) described in detail the technical basis for the SRP study report (Seager et al., 2019), along with the simulations used to estimate the sensitivity of the observatory. The corresponding software is publicly available for reproduction of the results below and for comparison with similar simulations (https://github.com/afromero/Starshade_Rendezvous_Probe_sims). The main result of these studies is that SRP is capable of discovering Earth-size planets in the habitable zones of nearby stars using the relatively moderate aperture of the Roman space telescope (Romero-Wolf et al., 2020) along with the Coronagraph Instrument (CGI). While the SRP science objectives include quantifying the amount of habitable zone dust around nearby stars and measuring the metallicity of known gas giant planets, the primary driver is the detection and characterization of Earth-like planets. The overall strategy, described in more detail in Romero-Wolf et al.
(2020), involves three main steps: 1) initial detection via direct imaging, 2) habitable zone determination via orbit tracing, and 3) atmosphere characterization via spectroscopy. The integration times necessary to image and to take spectra of Earth-like planets (steps 1 and 3) were taken into account with a model of the observatory. However, step 2 is more complicated since the observatory field of regard is constrained by solar exclusion angles, typically limiting the target observing windows to two $\sim$30 day blocks per year – a total of 4 observing opportunities during the assumed 2-year mission lifetime. Depending on the orientation and phase of a planet’s orbit, it may or may not be visible during each of these 4 observing windows. In Romero-Wolf et al. (2020), we assumed that detecting the planet during at least 3 of the 4 epochs would be sufficient to determine if a planet lies within its parent star’s habitable zone. In this paper, we consider this step in more detail, performing multi-epoch orbit fitting for the target list and expected signal-to-noise given by the Romero-Wolf et al. (2020) observatory model. Measurement of planetary orbits has been previously modeled for several types of observations – radial velocity, astrometric wobble, and coronagraphic direct imaging. Examples include: 1) Mawet et al. (2019) combining radial velocity measurements with direct imaging upper limits to improve the orbital fit for the planet eps Eri b, 2) Ford (2006) quantifying the robustness of planetary orbit determination via the stellar astrometric signal, concentrating on the difficulty posed by multi-planet systems, and 3) Guyon et al. (2013) considering simultaneous stellar astrometry and direct imaging, finding that Earth-like planets can be characterized in just a few observations. Among the studies that, like this paper, concentrate on direct imaging, Blunt et al. 
(2017) performed a detailed analysis of orbital constraints based on the small fraction of an orbit that is traced by known long-period directly imaged planets. For theoretical cases where the observations span at least half an orbital period, Horning et al. (2019) find that three equally spaced observations with SNR $\geq$ 10 can measure the semi-major axis and eccentricity to 10%. Guimond & Cowan (2019) also consider direct imaging of shorter-period planets, finding that a habitable zone planet's semi-major axis can be measured to within 5% if it is observed with 3.5 mas precision over three epochs, each spaced by at least 90 days. Here we consider the results obtainable by a specific mission concept, the Starshade Rendezvous Probe. Unlike previous work, this includes: 1) a realistic signal-to-noise calculation as a function of stellar illumination, rather than an assumed astrometric precision, 2) a starshade-specific inner working angle that obscures planet images close to the star, and 3) observing windows based on known target sky positions and observatory pointing constraints, not just arbitrarily spaced images. Furthermore, we focus here not on general orbit-fitting results, but on a specific science question: whether or not we can determine if a planet lies in its parent star's habitable zone. In the following, we first summarize the observatory model from Romero-Wolf et al. (2020) used to calculate the signal-to-noise for each planet image (§2). For many sets of simulated observations, we then extract orbital elements for each injected planet (§3). We give the resulting precision of the orbital fits in §4 and summarize in §5.

## 2 Observing Model

We briefly summarize the models used for planet properties, orbit propagation, and the observatory. A detailed presentation of these models can be found in Romero-Wolf et al. (2020).
### 2.1 Targets

Planet sizes and orbital periods are drawn randomly from the ranges defined for Earth-like planets, based on the distribution defined by SAG-13 (Belikov et al., 2017) and modified by HabEx to include the dependence of the lower limit of planet radii on the orbital semi-major axis. The orbital period $P$ defines the orbital radius $a_{p}$ by way of the stellar mass $M_{\star}$ using Kepler's third law. For sampling of Earth-like exoplanets, the orbits are assumed to be circular, consistent with most previous studies, e.g. Stark et al. (2016); however, when fitting, we allow eccentricity to be a free parameter (see §3). The orbital radii are sampled over a range from inside the inner habitable zone (defined as 0.95$\sqrt{L_{\star}}$, where $L_{\star}$ is the stellar luminosity relative to the Sun) to outside the outer habitable zone (defined as 1.67$\sqrt{L_{\star}}$; Kasting et al., 1993). The range of planet radii considered is bounded above at $r_{pl}\leq 1.4$ $r_{\oplus}$, based on evidence suggesting that planets below this radius are predominantly rocky (Rogers, 2015). The lower bound on terrestrial planet radii depends on the planet's ability to retain an appreciable atmosphere, which, in turn, depends on its stellar illumination. This results in a dependence on the planet's semi-major axis $a_{\rm p}$, modified by the stellar luminosity, giving $r_{pl}\geq 0.8a^{1/2}_{p}/L^{1/4}_{\star}$ (Zahnle & Catling, 2017). We model Earth-like exoplanets assuming they scatter light isotropically, using a Lambertian illumination phase function with a geometric albedo of 0.2 (Robinson & Reinhard, 2018). We model the star as a blackbody radiator with the parameters provided in ExoCat (Turnbull et al., 2012). We include obscuring dust, both in the target system (exozodiacal dust with a fiducial value of 4.5 zodi; Ertel et al., 2020) and locally in the Solar System (zodiacal dust, using the model of Leinert et al. (1998)).
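The relations above (habitable-zone bounds, Kepler's third law in solar units, the lower planet-radius bound, and the Lambertian phase function) can be collected into a short script. This is our own illustrative sketch, not the paper's code; for tau Ceti it approximately reproduces the habitable-zone extent and periods listed in Table 3 (small differences reflect rounding of the stellar parameters).

```python
import math

R_EARTH_AU = 4.2635e-5  # Earth radius in AU

def habitable_zone_au(L_star):
    """HZ bounds in AU: [0.95, 1.67] * sqrt(L/L_sun) (Kasting et al. 1993)."""
    return 0.95 * math.sqrt(L_star), 1.67 * math.sqrt(L_star)

def period_yr(a_au, M_star):
    """Kepler's third law in solar units: P[yr] = sqrt(a^3 / M)."""
    return math.sqrt(a_au ** 3 / M_star)

def min_rocky_radius(a_au, L_star):
    """Lower radius bound r_pl >= 0.8 * (a_p / sqrt(L))^(1/2) r_Earth
    (Zahnle & Catling 2017, with the luminosity scaling of Table 1)."""
    return 0.8 * math.sqrt(a_au / math.sqrt(L_star))

def lambert_phase(alpha_rad):
    """Lambertian phase function; equals 1 at full phase (alpha = 0)."""
    return (math.sin(alpha_rad)
            + (math.pi - alpha_rad) * math.cos(alpha_rad)) / math.pi

def contrast(geom_albedo, r_pl_rearth, a_au, alpha_deg):
    """Planet/star flux ratio: A_g * (R_p / a)^2 * Phi(alpha)."""
    rp = r_pl_rearth * R_EARTH_AU
    return geom_albedo * (rp / a_au) ** 2 * lambert_phase(math.radians(alpha_deg))

# tau Ceti: L = 0.52 L_sun, M = 0.80 M_sun, d = 3.7 pc (Table 3 values)
L, M, d_pc = 0.52, 0.80, 3.7
inner, outer = habitable_zone_au(L)
print(f"HZ: {1e3 * inner / d_pc:.0f}-{1e3 * outer / d_pc:.0f} mas, "
      f"P = {period_yr(inner, M):.2f}-{period_yr(outer, M):.2f} yr")
# Earth twin at quadrature (alpha = 90 deg) with the text's A_g = 0.2
print(f"contrast ~ {contrast(0.2, 1.0, 1.0, 90.0):.1e}")
```

The quadrature contrast for an Earth twin at 1 AU comes out near $10^{-10}$, which is why the assumed instrument contrast of $4\times 10^{-11}$ is needed.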
Assumed planet and dust characteristics are summarized in Table 1.

Table 1: Planet and Dust Parameters

| Parameter | Value (or [Range]) |
|---|---|
| Earth-like planet geometric albedo$^{a}$ | 0.2 |
| Earth-like planet radius$^{b}$ | $[0.8a_{\rm p}^{0.5}-1.4]$ $r_{\oplus}$ |
| Habitable zone | $[0.95-1.67]\sqrt{L_{\star}}$ |
| Zodiacal dust brightness | Leinert et al. (1998) |
| Exozodi dust brightness$^{c}$ | 4.5 zodi |

$^{a}$ For the assumed isotropic scattering, this geometric albedo is equivalent to a spherical albedo of 0.3. $^{b}$ The semi-major axis $a_{\rm p}$ is modified by a factor of $\sqrt{L_{\star}}$ to account for stellar irradiance. $^{c}$ The unit of 1 zodi is equivalent to 22 mag/arcsec².

While the use of an occulting starshade does allow for detection of fainter and closer planets, it also constrains the allowed times of observation. The telescope-starshade system has a region of allowed Sun angles over which it can operate, with the lower limit defined by the exclusion angle of the telescope baffle and the outer limit defined by reflection and scattering of sunlight off the starshade into the telescope baffle (Table 2).

### 2.2 Observatory

The integration times are based on the Starshade/Roman system parameters provided in Table 2. Roman has a telescope diameter of 2.4 m, resulting in a point spread function of 65 mas at 750 nm wavelength. We assume observations in the 615 – 800 nm band with an end-to-end efficiency, including optical throughput and detector efficiency, of 3.5% in imaging mode. The starshade has an inner working angle (IWA) of 100 mas, below which planets are assumed not to be observable. The instrument contrast at and above the IWA is assumed to be $4\times 10^{-11}$, as calculated by Seager et al. (2019). Integration times vary between 1 and 6.3 days, depending on the target (explained below).
Besides the sensitivity, determined by the parameters given above, the main constraints on orbit reconstruction are the nominal mission lifetime of 2 years and the solar exclusion angles. The minimum solar exclusion angle of $54^{\circ}$ is set by the telescope baffle, while the maximum solar exclusion angle of $83^{\circ}$ is set by scattering off the edge of the starshade. For Roman's orbit around the L2 Sun-Earth Lagrange point, the calculated observing windows are shown in Figure 1. During the 2-year lifetime of the mission, there are generally 4 opportunities to observe each target. Targets with Earth-like planet candidates, following the decision tree laid out in Romero-Wolf et al. (2020), are assumed to be visited once in each of the 4 available observing windows. While spectral characterization can be performed in a single visit at a favorable illumination phase, multiple epochs are needed to constrain the planet's orbit, in particular its semi-major axis, which determines whether the planet is in the habitable zone. To best trace the orbit, the observations should be spaced as evenly as possible over the orbital period, given the limited observing windows and mission lifetime.

Table 2: Mission Parameters

| Parameter | Assumed Performance |
|---|---|
| Mission lifetime | 2 years |
| Telescope primary mirror | 2.4 m |
| Imaging resolution | 65 mas at 750 nm |
| Imaging bandpass | 615 – 800 nm |
| Imaging end-to-end efficiency | 0.035 |
| Solar exclusion angle (min) | $54^{\circ}$ |
| Solar exclusion angle (max) | $83^{\circ}$ |
| Inner working angle (IWA) | 100 mas |
| Instrument contrast | $4\times 10^{-11}$ |
| Imaging integration time | 1 – 6.3 days |

The properties of the target stars are shown in Table 3. In Romero-Wolf et al. (2020) we defined the single-visit completeness as the probability that an Earth-like planet would be detected during one target observation at a random time.
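The observing-window geometry follows directly from these two exclusion angles. The sketch below is our own simplification (a Sun moving uniformly along the ecliptic over a 365-day year), not the mission scheduling software, but it reproduces the qualitative behavior in Figure 1: an ecliptic-plane target gets two roughly month-long windows per year, while a high-ecliptic-latitude target like sigma Dra gets a single long window.

```python
import math

SUN_MIN, SUN_MAX = 54.0, 83.0  # deg: telescope-baffle and starshade-glint limits

def sun_angle_deg(beta_deg, dlam_deg):
    """Angle between a target at ecliptic latitude beta and the Sun, when the
    Sun's ecliptic longitude differs from the target's by dlam."""
    b, dl = math.radians(beta_deg), math.radians(dlam_deg)
    return math.degrees(math.acos(math.cos(b) * math.cos(dl)))

def observable_days_per_year(beta_deg):
    """Count days of a 365-day year with SUN_MIN <= Sun angle <= SUN_MAX,
    with the Sun advancing 360/365 deg per day along the ecliptic."""
    days = 0
    for day in range(365):
        dlam = day * 360.0 / 365.0 - 180.0
        if SUN_MIN <= sun_angle_deg(beta_deg, dlam) <= SUN_MAX:
            days += 1
    return days

# Ecliptic-plane target: two ~29-day windows; beta = 70 deg: one long window.
print(observable_days_per_year(0.0), observable_days_per_year(70.0))
```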
The imaging integration time for each target is set to the value required to reach a single-visit completeness of at least 50%, with a minimum of 1 day and a maximum of 6.3 days (see Table 3). We also defined the orbital completeness as the probability that a randomly selected orbit will be detectable (S/N $\geq$ 7) during at least 3 of its 4 observing epochs. For most of the targets about half of the planet orbits meet this criterion, but the orbital completeness can fall below 20% for systems where the planet signal is relatively weak (again, see Table 3). In the next section, we calculate whether 3 detections in 4 visits is sufficient to determine whether a planet lies within its habitable zone.

Figure 1: Target star observing windows (reproduced from Romero-Wolf et al. (2020)) resulting from the telescope and starshade solar exclusion angles. Each target typically has two $\sim$30-day-long observing windows per year; targets at high ecliptic latitude can have longer windows. The black dots mark the four desired observation start times in a two-year period, driven by the need to allow sufficient time to spectrally characterize a planet if it is bright enough.

Table 3: Astrometric precision for Earth-like planets

| Star Name | Dist. (pc) | $V$ (mag) | $L_{\star}$ ($L_{\odot}$) | $M_{\star}$ ($M_{\odot}$) | HZ $a_{p}$ (mas) | HZ Period (yr) | Int. Time (days) | Single-visit Compl. | Orbital Compl.$^{d}$ | Astrom. Prec. per epoch (mas)$^{e}$ | $a_{p}$ Prec. (mas) | 50%: False Pos. | 50%: False Neg. | 95%: False Pos. | 95%: False Neg. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| tau Ceti$^{bc}$ | 3.7 | 3.5 | 0.52 | 0.80 | 187 – 329 | 0.64 – 1.5 | 1.0 | 0.67 | 0.55 | 3.2 | 5.7 | 2.2% | 2.0% | 0.0% | 12.6% |
| Procyon$^{a}$ | 3.5 | 0.4 | 7.1 | 1.49 | 722 – 1270 | 3.3 – 7.7 | 1.0 | 0.65 | 0.54 | 3.2 | 28.2 | 2.2% | 1.5% | 0.0% | 14.1% |
| eps Ind$^{ac}$ | 3.6 | 4.7 | 0.23 | 0.68 | 124 – 219 | 0.36 – 0.84 | 2.5 | 0.67 | 0.54 | 2.3 | 3.5 | 1.7% | 0.5% | 0.0% | 7.7% |
| Sirius$^{a}$ | 2.6 | $-$1.4 | 30.5 | 2.40 | 1993 – 3503 | 7.6 – 17.8 | 1.0 | 0.58 | 0.53 | 3.9 | 72.5 | 3.5% | 2.1% | 0.0% | 16.9% |
| omi 2 Eri$^{c}$ | 5.0 | 4.4 | 0.42 | 0.81 | 124 – 218 | 0.54 – 1.3 | 6.3 | 0.65 | 0.52 | 2.5 | 4.7 | 0.8% | 1.5% | 0.2% | 9.1% |
| Altair | 5.1 | 0.8 | 10.7 | 1.83 | 605 – 1064 | 4.0 – 9.4 | 2.5 | 0.58 | 0.51 | 3.9 | 36.3 | 3.5% | 3.9% | 0.0% | 24.3% |
| del Pav | 6.1 | 3.5 | 1.3 | 0.99 | 179 – 315 | 1.2 – 2.7 | 6.3 | 0.64 | 0.55 | 3.5 | 4.9 | 2.2% | 2.2% | 0.0% | 12.7% |
| 82 Eri$^{c}$ | 6.0 | 4.3 | 0.69 | 0.85 | 130 – 229 | 0.75 – 1.7 | 6.3 | 0.60 | 0.40 | 3.1 | 4.4 | 1.2% | 2.4% | 0.2% | 11.1% |
| sig Dra | 5.8 | 4.7 | 0.44 | 0.80 | 109 – 181 | 0.56 – 1.3 | 6.3 | 0.55 | 0.42 | 3.1 | 4.5 | 1.7% | 1.1% | 0.0% | 7.2% |
| bet Hyi | 7.5 | 2.8 | 3.7 | 1.14 | 245 – 430 | 2.3 – 5.4 | 6.3 | 0.58 | 0.40 | 3.8 | 20.9 | 4.6% | 5.2% | 0.2% | 31.7% |
| bet CVn$^{a}$ | 8.4 | 4.2 | 1.3 | 1.03 | 126 – 222 | 1.1 – 2.5 | 6.3 | 0.43 | 0.14 | 4.4 | 4.4 | 2.2% | 2.3% | 0.0% | 10.5% |
| 1 Ori | 8.1 | 3.2 | 3.0 | 1.24 | 203 – 358 | 1.9 – 4.4 | 6.3 | 0.50 | 0.31 | 4.9 | 9.3 | 4.1% | 5.5% | 0.0% | 21.6% |
| Fomalhaut$^{ab}$ | 7.7 | 1.2 | 16.5 | 2.05 | 500 – 879 | 5.3 – 12.3 | 6.3 | 0.46 | 0.44 | 4.9 | 50.5 | 4.9% | 4.0% | 0.0% | 40.7% |
| del Eri | 9.0 | 3.5 | 3.4 | 1.19 | 193 – 339 | 2.1 – 4.9 | 6.3 | 0.46 | 0.26 | 5.3 | 11.6 | 5.7% | 6.0% | 0.2% | 30.2% |
| gam Lep | 8.9 | 3.6 | 2.5 | 1.27 | 168 – 296 | 1.6 – 3.8 | 6.3 | 0.44 | 0.21 | 5.9 | 8.9 | 1.9% | 7.2% | 0.0% | 30.1% |
| zet Tuc | 8.6 | 4.2 | 1.3 | 1.01 | 127 – 224 | 1.1 – 2.7 | 6.3 | 0.42 | 0.16 | 5.4 | 4.9 | 2.1% | 5.4% | 0.0% | 24.9% |

$^{a}$ Binary. $^{b}$ Known debris disk. $^{c}$ Known to have planet(s). $^{d}$ Orbital completeness requires 3 detections with S/N $\geq$ 7. $^{e}$ Astrometric precisions are medians over many sampled planetary orbits.

## 3 Orbit Reconstruction

Having identified the best targets for detection of Earth-like planets, along with their observation availability windows, we now describe our approach to orbit reconstruction. We assume the planet is observed at the beginning of each window as shown in Figure 1, which provides 4 observing epochs per target in most cases. In cases where the star has a single long availability window per year, we set the observing times to the beginning and middle of that window. For each Monte Carlo sampled planet, we propagate its circular orbit to each of the observing epochs and calculate its illumination phase (see §2). We apply the observatory model to estimate the planet signal-to-noise ratio (SNR). Observations with SNR $\geq 7$ are considered detections; otherwise, the observation is rejected as a non-detection. The one-dimensional astrometric uncertainty is approximated as $\delta\theta=(65~{}\mathrm{mas})/SNR$. The median astrometric precision for each star ranges from 2.3 to 5.9 mas (see Table 3). The simulated data are created by taking the true position of the planet and adding two-dimensional Gaussian scatter based on the astrometric precision $\delta\theta$. For the orbit reconstruction, we implemented forward modeling of Kepler's laws, as described in Mede & Brandt (2017), in our own software package. We sample all six Keplerian parameters, also including uncertainties in the star's mass and distance, treating them as nuisance parameters. Table 4 lists the parameters that we fit for each orbit, along with their assumed ranges and prior constraints. We use the emcee Markov chain Monte Carlo (MCMC) software package (Foreman-Mackey et al., 2013) to fit the orbit.
The MCMC fitting procedure calculates the quality of fit for a series of parameter values, not only converging toward the best set of values but also finding the full parameter ranges that are consistent with the data. Periodic orbital elements ($\omega$, $\Omega$, $T_{0}$, and $i$) are modulated to stay within their prescribed bounds (typically 0 to 2$\pi$).

Table 4: Orbit Fitting Parameters

| Parameter | Description | Bounds | Prior Constraint |
|---|---|---|---|
| $a$ | semi-major axis | 0 – 10 AU | linear |
| $e$ | eccentricity | 0 – 1 | uniform |
| $\omega$ | argument of periastron | 0 – 360$^{\circ}$ | uniform |
| $i$ | inclination$^{a}$ | 0 – 90$^{\circ}$ | $\propto\sin(i)$ |
| $\Omega$ | longitude of the ascending node | 0 – 360$^{\circ}$ | uniform |
| $T_{0}$ | periastron phase | 0 – 360$^{\circ}$ | uniform |
| $M_{\star}$ | stellar mass | 0 – 5 $M_{\odot}$ | observed value with 10% uncertainty |
| $d_{\star}$ | distance | 0 – 20 pc | observed value with 1% uncertainty |

$^{a}$ $i=90^{\circ}$ corresponds to edge-on.

Three examples of the orbit reconstruction simulations are shown in Figure 2. The first panel shows Procyon, a relatively luminous star (7.1 $L_{\odot}$), whose habitable zone is correspondingly distant both in angular scale (0.7–1.3′′) and in physical space (2.5–4.5 AU). With a mass of $\sim$1.5 $M_{\odot}$, planets in the habitable zone have relatively long periods. The randomly selected planet orbiting Procyon is detected in all four observations, but because of the long period, only a fraction of the orbit is traced. The second panel shows a planet orbiting tau Ceti that is detected in only three of the four observing epochs; during the first observation, the planet falls behind the starshade mask (the grey circle in the center of each panel), a common occurrence for planets on inclined orbits. In the third panel (sigma Dra), there are again only 3 successful observations, but in this case the planet is too faint to be detected during the first epoch due to unfavorable illumination phase.
Also, because sigma Dra is near the ecliptic north pole, it has only one observing window per year (see Figure 1) and its orbital phase coverage is limited (epoch pairs 1/2 and 3/4 fall within the same window). With only three closely spaced epochs, the fit is relatively poorly constrained. An example of the MCMC posterior distributions is shown in Figure 3. For this inclined orbit, the inclination ($i$) and longitude of ascending node ($\Omega$) are well determined and accurately retrieved. The retrieved eccentricity ($e$) is necessarily larger than that of the assumed circular orbit, but is still close to zero (0.03$\pm$0.02). Given the circular orbit, the true argument of periastron ($\omega$) is undefined and the retrieved value is only loosely constrained. Most importantly, the semi-major axis ($a$) is well constrained by the observations, enabling us to determine that the planet lies well within the habitable zone.

Figure 2: Example fits for three different stars: Procyon, tau Ceti, and sigma Dra. In each case, the planet's semi-major axis lies within the habitable zone (dashed lines). The true orbit is shown in orange, with true positions marked as orange circles; simulated observations are shown as red error bars only when detected. The numbers indicate the visit number for each of the four observations. Sample best-fit orbits are shown as thin black lines. The region masked by the starshade is shown as a grey circle.

Figure 3: Best-fit parameters for a random planet orbiting tau Ceti, showing the marginalized probability distributions for 3 of the 8 model parameters (top histograms) and the correlations between pairs of parameters (central panels, with $1-$, $2-$, and $3-\sigma$ contours). The retrieved parameters are consistent with the true values (blue lines), although the fitted eccentricity is necessarily larger than the true circular orbit's.
For this example, the planet lies unambiguously inside the habitable zone; the distribution of semi-major axes falls entirely within tau Ceti's 187–329 mas range.

## 4 Results

For each of the 16 target stars listed in Table 3 we simulate 1000 random orbits and then extract orbital parameters as described above. Table 5 lists the number of orbital calculations for the full set of simulations (see Foreman-Mackey et al. (2013) for details on the MCMC parameters).

Table 5: Simulation Parameters

| Parameter | Quantity |
|---|---|
| Target stars | 16 |
| Random orbits per star | 1000 |
| MCMC iterations per orbit | 5000 |
| MCMC walkers | 100 |

Our first objective is to accurately determine each planet's semi-major axis. The ability to make this measurement depends not only on the astrometric precision of individual observations (2.3 to 5.9 mas; Table 3), but also critically on the orbital sampling. If a planet does not trace out its full orbit during the two-year observing window, the quality of the fit is reduced. This is particularly true for stars with high luminosity, which translates to a more distant habitable zone and hence longer orbital periods. Sirius, the most luminous star in our sample (30.5 $L_{\odot}$), has the worst precision in its orbit fitting (72.5 mas), whereas eps Ind, the least luminous star (0.23 $L_{\odot}$), has the best determined orbit (3.5 mas). (Table 3 lists the median semi-major axis precision obtained for the other target stars.) Our ultimate objective is to determine whether a planet lies within its parent star's habitable zone. The key metric for this determination is not the absolute precision, but rather the fractional precision on a planet's semi-major axis. While the absolute precision varies between Sirius and eps Ind by a factor of 20, the fractional precision for the two is comparable (2.7% and 2.1%, respectively), since Sirius' habitable zone is a factor of 16 larger than eps Ind's.
For the overall sample, the median fractional precision varies from 2.1% up to 7.6% for Fomalhaut. Figure 4 shows semi-major axis measurement precision versus true semi-major axis for each of the 1000 simulated planets around three of our target stars. 82 Eri (left panel) has one of the best precisions (4.4 mas median), although the performance degrades significantly for more distant orbits, where the planets are relatively faint. Fomalhaut (central panel) has one of the worst precisions (50.5 mas median), primarily because the planets in the habitable zone around this A-type star have periods considerably longer than our 2-year mission lifetime (5–12 years), such that only a fraction of each orbit is traced. The effect of limited phase coverage can be seen in Figure 5, which plots semi-major axis precision as a function of orbital period in the center of the habitable zone. Planets with periods less than our 2-year mission lifetime are well constrained, but those farther out have semi-major axis precision increasing roughly linearly with the period. While the semi-major axis precisions for other target stars (shown in Figure 8 in the Appendix) follow a similar pattern of smoothly decreasing precision with increasing $a_{p}$, sigma Dra (right panel in Figure 4) has an unusual bump around $a_{p}\simeq$160 mas, corresponding to orbital periods of $\sim$1 year. The decrease in performance is due to the sampling being repeated on 1-year cycles, where observations taken during the second year of the mission have about the same orbital anomaly as those taken in the first year of the mission (i.e. there is a 1-year aliasing). The resulting small range of orbital-anomaly coverage (see the right panel of Figure 2) makes it more difficult to fit the orbit. While the 1-year aliasing just discussed is most pronounced for sigma Dra, several other systems exhibit a similar effect at orbital periods that match the phasing of the observations.
For all of the precision plots (Figures 4 and 8), the semi-major axis corresponding to a 1-year period is indicated by a red hash mark; the other black marks correspond to other time differences between observing epochs (e.g. for 82 Eri, the first and second observations are separated by 40 days, the second and third by 325 days, and the first and fourth by 405 days; see the observing windows in Figure 1). While the 1-year aliasing generally causes the strongest effect, other orbital period/observing period alignments can also degrade performance. Figure 4: For each star (three examples shown here – 82 Eri, Fomalhaut, and sigma Dra), the orbital parameters and their uncertainties are retrieved for 1000 random planet orbits, each of which is directly imaged at least three times. The precision for measuring the semi-major axis of each planet is shown here as a function of the true semi-major axis. The habitable zone is interior to the dashed lines. The starshade masks all orbits inside of 100 mas. Hash marks at the bottom of each panel correspond to orbital periods equal to the spacing between observing epochs, with a 1-year period highlighted in red. Figure 5: Our ability to pin down a planet’s semi-major axis depends on the fraction of its orbit that is traced by the observations. Periods less than the mission lifetime (2 years) are well sampled, while those with longer periods are only observed for a partial arc, resulting in lower precision in determining the orbit. Figure 6: As in Figure 4, orbital parameters are measured for 1000 random planetary orbits around each of three stars – 82 Eri, Fomalhaut, and sigma Dra. The derived probability of each planet residing in its star’s habitable zone is shown as a function of its true semi-major axis. The habitable zone is indicated by the dashed lines. Our MCMC fitting procedure calculates a (non-Gaussian) posterior distribution for each orbital parameter.
From these distributions we derive the probability that each planet lies within its star’s habitable zone. Figure 6 shows the results for the same three target stars as in Figure 4. For 82 Eri (left panel), the orbit fitting is fairly deterministic – planets well inside the habitable zone are correctly identified as such with high probability ($>$99%), while those well outside are ruled out (probability $<$1%). As one would expect, there is some ambiguity near the edges of the habitable zone, but for the overall sample there is just a 2.4% chance of a habitable zone planet being falsely classified as falling outside the habitable zone, while 98.8% of the planets classified as residing in the habitable zone are truly habitable zone planets (i.e. a false positive rate of 1.2%). Figure 7: False positive and false negative rates for 82 Eri, Fomalhaut, and sigma Dra, shown as a function of the habitable-zone probability threshold. A more liberal threshold (lower probability) reduces the false negative rate, but increases the false positive rate. A very conservative threshold (to the right of each panel) can ensure that no false detections are made, but misses a significant number of true habitable-zone planets. A performance metric combining the two rates is given by the $F_{1}$ score, the harmonic mean of the precision (the fraction of detected habitable zone planets that truly are located in the habitable zone) and recall (the fraction of habitable zone planets that are correctly classified as such). These rates are based on a nominal classification threshold, where planets with habitable zone probability $>$50% are categorized as habitable zone planets. If a more conservative approach is desired, fewer planets can be classified as habitable-zone planets. If a 95% threshold is used for classification, for example, then 82 Eri will have only 0.2% false detections. However, only 88.9% of the true habitable zone planets will be included (11.1% false negative rate).
For Fomalhaut (middle panel of Figure 6), the worse orbital precision translates to much more scatter in the plotted probabilities and less certainty for determining whether a planet lies in its habitable zone. Still, there is only a 4.0% probability of a habitable zone planet being misclassified, and only a 4.9% chance of a habitable-zone-classified planet not being truly in the habitable zone. For the conservative (95%) classification threshold, the false positive rate falls to zero, but at the expense of only 59% of the true habitable zone planets being included (i.e. a false negative rate of 41%). The false positive and false negative rates for 82 Eri, Fomalhaut, and sigma Dra are shown in Figure 7, as a function of the classification threshold. These plots also display an overall success metric – the $F_{1}$ score, the harmonic mean of (1 - false positive rate) and (1 - false negative rate). While a nominal threshold of 50% results in an optimal balance between these two factors (i.e. it gives the maximum $F_{1}$ score), an emphasis on avoiding false detections would warrant a more conservative approach. The false positive/false negative rates for each star are listed in Table 3 for both the nominal classification threshold (50%) and a conservative classification threshold (95%). For the nominal threshold, the average performance for the overall sample is a 2.8% false positive rate and a 3.3% false negative rate. For the conservative threshold, the average false positive rate is just 0.05%, but the average false negative rate goes up to 19%. ## 5 Conclusions Based on a model for the Starshade Rendezvous Probe (SRP) mission concept with target-specific observing windows and SNR calculations dependent on the planet illumination during each window, we have quantified the ability of SRP to identify habitable zone planets. We find that detecting a planet in at least 3 of the 4 observing epochs is sufficient to adequately measure the planet’s semi-major axis.
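To make the threshold trade-off concrete, here is a minimal sketch (the function and variable names are ours) of how the false positive rate, false negative rate, and the $F_{1}$ score of Figure 7 can be computed from a set of per-planet habitable-zone probabilities:

```python
import numpy as np

def hz_rates(hz_prob, in_hz, threshold=0.5):
    """False positive rate, false negative rate, and F1 score for
    habitable-zone (HZ) classification at a given probability threshold.

    hz_prob : per-planet HZ probability from the MCMC orbit fits
    in_hz   : boolean array, True where the planet truly lies in the HZ
    """
    hz_prob = np.asarray(hz_prob)
    in_hz = np.asarray(in_hz)
    classified = hz_prob > threshold
    # False negatives: true HZ planets classified as outside the HZ.
    fnr = np.mean(~classified[in_hz]) if in_hz.any() else 0.0
    # False positives: HZ-classified planets not truly in the HZ.
    fpr = np.mean(~in_hz[classified]) if classified.any() else 0.0
    # F1 is the harmonic mean of (1 - FPR) and (1 - FNR), i.e. of
    # precision and recall as defined in the text.
    precision, recall = 1.0 - fpr, 1.0 - fnr
    f1 = 2 * precision * recall / (precision + recall)
    return fpr, fnr, f1
```

Sweeping `threshold` from 0 to 1 with this function reproduces the qualitative behavior of Figure 7: a higher threshold drives the false positive rate toward zero while the false negative rate grows.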
For a 16-star sample observed with this strategy, we find that habitable zone planets are correctly identified as such 96.7% of the time, with 2.8% contamination by false classifications. Including the full range of planet masses, the mission is expected to detect $\sim$10 planets in the vicinity of the habitable zone (Romero-Wolf et al., 2020), such that a very small number of planets (fewer than one) is expected to be misclassified. Acknowledgements: Part of this work was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. ©2020. All rights reserved. This research has made use of 1) the NASA Exoplanet Archive, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program and 2) the SIMBAD database, operated at CDS, Strasbourg, France. ## References * Belikov et al. (2017) Belikov, R., et al. 2017, https://exoplanets.nasa.gov/exep/exopag/sag/#sag13 * Blunt et al. (2017) Blunt, S., Nielsen, E. L., De Rosa, R. J., et al. 2017, AJ, 153, 229, doi: 10.3847/1538-3881/aa6930 * Ertel et al. (2020) Ertel, S., Defrère, D., Hinz, P., et al. 2020, AJ, 159, 177, doi: 10.3847/1538-3881/ab7817 * Ford (2006) Ford, E. B. 2006, PASP, 118, 364, doi: 10.1086/500813 * Foreman-Mackey et al. (2013) Foreman-Mackey, D., Hogg, D. W., Lang, D., & Goodman, J. 2013, PASP, 125, 306, doi: 10.1086/670067 * Guimond & Cowan (2019) Guimond, C. M., & Cowan, N. B. 2019, AJ, 157, 188, doi: 10.3847/1538-3881/ab0f2e * Guyon et al. (2013) Guyon, O., Eisner, J. A., Angel, R., et al. 2013, ApJ, 767, 11, doi: 10.1088/0004-637X/767/1/11 * Horning et al. (2019) Horning, A., Morgan, R., & Nielson, E. 2019, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol.
11117, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, 111171C, doi: 10.1117/12.2529741 * Kasting et al. (1993) Kasting, J. F., Whitmire, D. P., & Reynolds, R. T. 1993, Icarus, 101, 108, doi: 10.1006/icar.1993.1010 * Leinert et al. (1998) Leinert, C., Bowyer, S., Haikala, L. K., et al. 1998, A&AS, 127, 1, doi: 10.1051/aas:1998105 * Mawet et al. (2019) Mawet, D., Hirsch, L., Lee, E. J., et al. 2019, AJ, 157, 33, doi: 10.3847/1538-3881/aaef8a * Mede & Brandt (2017) Mede, K., & Brandt, T. D. 2017, AJ, 153, 135, doi: 10.3847/1538-3881/aa5e4a * Robinson & Reinhard (2018) Robinson, T. D., & Reinhard, C. T. 2018, arXiv e-prints, 1804, arXiv:1804.04138. https://arxiv.org/abs/1804.04138 * Rogers (2015) Rogers, L. A. 2015, ApJ, 801, 41, doi: 10.1088/0004-637X/801/1/41 * Romero-Wolf et al. (2020) Romero-Wolf, A., Bryden, G., Seager, S., et al. 2020 * Seager et al. (2019) Seager, S., Kasdan, J., Romero-Wolf, A., et al. 2019, Starshade Rendezvous Probe, https://smd-prod.s3.amazonaws.com/science-red/s3fs-public/atoms/files/Starshade2.pdf * Stark et al. (2016) Stark, C. C., Shaklan, S., Lisman, D., et al. 2016, Journal of Astronomical Telescopes, Instruments, and Systems, 2, 041204, doi: 10.1117/1.JATIS.2.4.041204 * Turnbull et al. (2012) Turnbull, M. C., Glassman, T., Roberge, A., et al. 2012, PASP, 124, 418, doi: 10.1086/666325 * Zahnle & Catling (2017) Zahnle, K. J., & Catling, D. C. 2017, ApJ, 843, 122, doi: 10.3847/1538-4357/aa7846 ## Appendix A Additional figures Figure 8: For each of our 16 target stars, the orbital parameters are retrieved for 1000 random planet orbits, each of which is directly imaged at least three times. The precision for measuring each planet’s semi-major axis is shown here. Figure 9: For each target star, orbital parameters and their uncertainties are measured for 1000 random planetary orbits. The probability of each planet residing in its star’s habitable zone is shown as a function of its true semi-major axis.
# Generalized Spatio-Temporal RNN Beamformer for Target Speech Separation ###### Abstract Although the conventional mask-based minimum variance distortionless response (MVDR) beamformer could reduce the non-linear distortion, the residual noise level of the MVDR-separated speech is still high. In this paper, we propose a spatio-temporal recurrent neural network based beamformer (RNN-BF) for target speech separation. This new beamforming framework directly learns the beamforming weights from the estimated speech and noise spatial covariance matrices. Leveraging the temporal modeling capability of RNNs, the RNN-BF could automatically accumulate the statistics of the speech and noise covariance matrices to learn the frame-level beamforming weights in a recursive way. An RNN-based generalized eigenvalue (RNN-GEV) beamformer and a more generalized RNN beamformer (GRNN-BF) are proposed. We further improve the RNN-GEV and the GRNN-BF by using layer normalization to replace the commonly used mask normalization on the covariance matrices. The proposed GRNN-BF obtains better performance than prior art in terms of speech quality (PESQ), speech-to-noise ratio (SNR) and word error rate (WER). Index Terms: MVDR, Spatio-temporal RNN beamformer, ADL-MVDR, GEV, GRNN-BF, speech separation ## 1 Introduction The mask-based MVDR [1, 2, 3, 4, 5, 6] could achieve less non-linear distortion than the existing purely “black box” neural network (NN) based speech separation methods [11, 7, 8, 9, 10]. However, the residual noise level of the mask-based MVDR method is still high [12, 6]. Most mask-based beamformers are optimized at the chunk level [1, 3, 5, 6]. The calculated beamforming weights are hence chunk-level, which is not optimal for each frame. Furthermore, the matrix inversion in the traditional beamformer (e.g., MVDR) comes with a numerical instability problem [13, 14, 15, 16], which is caused by singularity in the matrix inversion [13].
Although this issue could be alleviated by using some techniques, e.g., diagonal loading [17, 14, 13], it is not fully solved. This problem could be worse in an end-to-end joint training system [13, 6]. Time-varying mask-based beamformers were investigated in [18, 19]; however, they also suffer from the numerical instability problem [13, 14]. The recurrent neural network (RNN) has previously been shown to be able to solve the matrix inversion [20, 21] and the eigenvalue decomposition problems [22, 23], which are the two main matrix operations in most beamformers’ solutions, e.g., the MVDR [3, 5, 24] and the generalized eigenvalue (GEV) beamformer [25, 26]. Although different types of beamformers [24, 3, 25, 27] have different optimization and constraint conditions, most of their solutions are derived from the estimated speech and noise covariance matrices. These prior studies inspired us to use RNNs to directly learn the beamforming weights from the estimated speech and noise covariance matrices. Hence, we recently proposed an all-deep-learning MVDR (ADL-MVDR) method [28], which was superior to the traditional MVDR beamformer [3, 6]. In the ADL-MVDR [28], the matrix inversion and principal component analysis (PCA) operations of the traditional MVDR are replaced by two RNNs with the estimated speech and noise covariance matrices as the input. In this work, we propose more advanced and generalized RNN-based beamformers (RNN-BFs). Note that there were also several other learning-based beamforming methods [29, 30], which yielded worse performance than the traditional mask-based MVDR [3] approach due to the lack of explicit use of the speech and noise covariance matrices [29]. In this work, three contributions are made to further improve the ADL-MVDR [28] beamformer. First, we propose an RNN-based GEV (RNN-GEV) beamformer, which achieves slightly better performance than the ADL-MVDR [28].
It indicates that RNNs could also be incorporated into other traditional beamforming algorithms. Second, a generalized RNN beamformer (GRNN-BF) is proposed, which is superior to both the RNN-GEV and the ADL-MVDR [28]. The GRNN-BF directly learns the frame-level beamforming weights from the covariance matrices without following conventional beamformers’ solutions. This suggests that the GRNN-BF could learn a better beamforming solution by automatically accumulating the covariance matrices across history frames. Finally, layer normalization [31] is proposed to replace the commonly used mask normalization [32, 29, 6, 13] on the covariance matrices. The layer normalization is more flexible than the mask normalization and achieves better performance. These improvements make our proposed GRNN-BF perform the best in terms of PESQ, SNR and word error rate (WER) compared to the traditional MVDR beamformers [6] and the ADL-MVDR beamformer [28]. The rest of this paper is organized as follows. In Section 2, traditional mask-based beamformers are described. Section 3 presents the proposed generalized RNN-based beamformers (GRNN-BFs). The experimental setups and results are provided in Sections 4 and 5, respectively. Section 6 concludes this paper. Figure 1: The system framework includes the dilated Conv-1D blocks based complex-valued ratio filter (cRF) estimator and the proposed spatio-temporal RNN beamformer (RNN-BF). The cRF estimator is actually a Conv-TasNet variant [8] with a fixed STFT encoder [33]. $\odot$ indicates the complex-domain multiplication to estimate the multi-channel speech $\hat{\mathbf{S}}$ and noise $\hat{\mathbf{N}}$ through the cRF (as shown in Eq. (5)). $\otimes$ is the matrix multiplication of beamforming (see Eq. (13)).
The speech covariance matrix ${\bf{{\Phi}}}_{\textbf{S}}$, the noise covariance matrix ${\bf{{\Phi}}}_{\textbf{N}}$ and the beamforming weights $\mathbf{w}$ are complex-valued variables; their real and imaginary parts are reshaped and concatenated together. The time-domain scale-invariant SNR (Si-SNR) loss [8] is applied for end-to-end training. ## 2 Traditional mask-based beamformers This section describes the traditional mask-based beamformers. Given the $M$-channel speech mixture $\textbf{y}=[\textbf{y}_{1},\textbf{y}_{2},...,\textbf{y}_{M}]$, the corresponding $M$-channel target speaker’s speech and noise (the sum of interfering speakers’ speech and background noise) waveforms are denoted as s and n, respectively. After applying the short-time Fourier transform (STFT), we have $\textbf{Y},\textbf{S},\textbf{N}$ in the time-frequency (T-F) domain, $\textbf{Y}(t,f)=\textbf{S}(t,f)+\textbf{N}(t,f)$ (1) where $(t,f)$ indicates the time and frequency indices of the T-F domain variables. In conventional mask-based beamforming [1, 3, 5, 6], a neural network is used to predict the real-valued speech mask $\text{RM}_{\textbf{S}}$ and the real-valued noise mask $\text{RM}_{\textbf{N}}$. Then the speech covariance matrix ${\bf{{\Phi}}}_{\textbf{S}}$ is calculated with the predicted speech mask $\text{RM}_{\textbf{S}}$, ${\bf{{\Phi}}}_{\textbf{S}}(f)=\frac{\sum_{t=1}^{T}{\text{RM}}^{2}_{\textbf{S}}(t,f)\textbf{Y}(t,f)\textbf{Y}^{\sf H}(t,f)}{\sum_{t=1}^{T}{\text{RM}}^{2}_{\textbf{S}}(t,f)}$ (2) where $T$ stands for the total number of frames in a chunk and $\sf H$ is the Hermitian transpose. The noise covariance matrix ${\bf{{\Phi}}}_{\textbf{N}}$ could be calculated in the same way with the noise mask $\text{RM}_{\textbf{N}}$.
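For illustration, the chunk-level covariance of Eq. (2) can be sketched directly in NumPy (the array shapes and function name are our own choice, not the paper's):

```python
import numpy as np

def chunk_covariance(Y, mask):
    """Chunk-level covariance of Eq. (2).

    Y    : complex STFT mixture, shape (T, F, M) - frames, frequencies, mics
    mask : real-valued mask, shape (T, F)
    Returns Phi of shape (F, M, M), one M x M covariance per frequency.
    """
    w = mask ** 2
    # Mask-weighted sum of outer products Y(t,f) Y(t,f)^H over frames.
    num = np.einsum('tf,tfm,tfn->fmn', w, Y, Y.conj())
    return num / w.sum(axis=0)[:, None, None]
```

By construction the result is Hermitian and positive semi-definite per frequency, which is what the beamformer solutions below rely on.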
The MVDR solution [3] can be derived as, $\small\mathbf{w}_{\text{MVDR}}(f)=\frac{\mathbf{\Phi}_{\mathbf{N}}^{-1}(f)\mathbf{\bm{v}}(f)}{\mathbf{\bm{v}^{\sf H}}(f)\mathbf{\Phi}_{\mathbf{N}}^{-1}(f)\mathbf{\bm{v}}(f)},\quad\mathbf{w}_{\text{MVDR}}(f)\in\mathbb{C}^{M}$ (3) where $\mathbf{\bm{v}}(f)$ represents the steering vector at the $f$-th frequency bin. $\mathbf{\bm{v}}(f)$ could be derived by applying PCA on ${\bf{{\Phi}}}_{\textbf{S}}(f)$, namely $\mathbf{\bm{v}}(f)=\mathcal{P}\\{{\bf{{\Phi}}}_{\textbf{S}}(f)\\}$. Another commonly used beamformer is the generalized eigenvalue (GEV) beamformer [1], whose optimal solution is the generalized principal component [1, 26], $\mathbf{w}_{\text{GEV}}(f)=\mathcal{P}\\{{\bf{{\Phi}}}^{-1}_{\textbf{N}}(f){\bf{{\Phi}}}_{\textbf{S}}(f)\\},\quad\mathbf{w}_{\text{GEV}}(f)\in\mathbb{C}^{M}$ (4) However, the beamforming weights $\mathbf{w}$ above are usually chunk-level [1, 3, 5, 6], which is not optimal for each frame. Furthermore, the matrix inversion involved in Eq. (3) and Eq. (4) has the numerical instability problem [13, 14, 15, 16]. Note that we have already applied the diagonal loading technique [17, 14, 13] to alleviate this problem in our MVDR baselines. On the other hand, although the MVDR and GEV are two different beamformers, their solutions are both derived from the speech and noise covariance matrices, namely ${\bf{{\Phi}}}_{\textbf{S}}$ and ${\bf{{\Phi}}}_{\textbf{N}}$. This is also our motivation to use RNNs to directly learn the beamforming weights from ${\bf{{\Phi}}}_{\textbf{S}}$ and ${\bf{{\Phi}}}_{\textbf{N}}$. ## 3 Proposed generalized RNN beamformer We aim to extract the target speaker’s speech from the multi-channel multi-talker overlapped mixture. As shown in Fig. 1, the whole system consists of a complex-valued ratio filter (cRF) estimator and the proposed spatio-temporal RNN beamformer. The predicted cRFs are used to calculate the covariance matrices.
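The two closed-form solutions of Eqs. (3) and (4) can be sketched per frequency bin with NumPy, with diagonal loading included as the text notes for the MVDR baselines (the function names are ours):

```python
import numpy as np

def _loaded(Phi, eps=1e-6):
    """Diagonal loading to stabilize the inversion of a covariance matrix."""
    M = Phi.shape[0]
    return Phi + eps * np.trace(Phi).real / M * np.eye(M)

def mvdr_weights(Phi_S, Phi_N):
    """MVDR solution of Eq. (3) for one frequency bin."""
    # Steering vector via PCA: principal eigenvector of the speech covariance.
    _, vecs = np.linalg.eigh(Phi_S)
    v = vecs[:, -1]
    num = np.linalg.solve(_loaded(Phi_N), v)   # Phi_N^{-1} v
    return num / (v.conj() @ num)

def gev_weights(Phi_S, Phi_N):
    """GEV solution of Eq. (4): principal component of Phi_N^{-1} Phi_S."""
    vals, vecs = np.linalg.eig(np.linalg.solve(_loaded(Phi_N), Phi_S))
    return vecs[:, np.argmax(vals.real)]
```

With a rank-1 speech covariance and white noise, the MVDR weights are distortionless toward the steering direction and the GEV weights align with it, which is a quick way to sanity-check either implementation.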
Then the proposed GRNN-BF learns the beamforming weights from the covariance matrices. The details of the input features and the cRF estimator will be described in Sec. 4. Our proposed GRNN-BF is illustrated first. Recently we proposed the ADL-MVDR method [28], which is superior to the traditional mask-based MVDR beamformers [3, 5, 6]. The ADL-MVDR uses two RNNs to replace the matrix inversion and PCA in the MVDR solution (defined in Eq. (3)). Here we explore using RNNs to implement the GEV beamformer (defined in Eq. (4)) and another, more generalized RNN beamformer. Layer normalization [31] is also proposed to replace the commonly used mask normalization [32, 29, 13] applied on the covariance matrices. ### 3.1 Layer normalization on covariance matrix Before we use RNNs to learn the beamforming weights, the speech and noise covariance matrices should first be estimated. Real-valued masks [32], the complex-valued ratio mask (cRM) [34, 6] or the complex-valued ratio filter (cRF) [35, 28] could be used to estimate the speech and noise. In our previous ADL-MVDR work [28], the cRF [35] was demonstrated to be better than the cRM [34]. The cRF [35] is a $K\times K$-sized extension of the cRM [34] that uses the nearby $K\times K$ T-F bins around $(t,f)$. With the speech $\text{cRF}_{\textbf{S}}(t,f)$, the multi-channel target speech is estimated as, $\footnotesize\hat{\mathbf{S}}(t,f)=\sum_{\tau_{1}=-K}^{\tau_{1}=K}\sum_{\tau_{2}=-K}^{\tau_{2}=K}\text{cRF}_{\textbf{S}}(t+\tau_{1},f+\tau_{2})*\mathbf{Y}(t+\tau_{1},f+\tau_{2})$ (5) Then the frame-wise speech covariance matrix is calculated as, $\small{\bf{{\Phi}}}_{\textbf{S}}(t,f)=\frac{\hat{\mathbf{S}}(t,f)\hat{\mathbf{S}}^{\sf H}(t,f)}{\sum_{t=1}^{T}\text{cRM}_{\textbf{S}}^{\sf H}(t,f)\text{cRM}_{\textbf{S}}(t,f)}$ (6) where $\text{cRM}_{\textbf{S}}(t,f)$ is the center unit of the speech $\text{cRF}_{\textbf{S}}(t,f)$.
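A NumPy sketch of Eqs. (5)-(6) follows. Note that Eq. (5) as printed indexes the filter at the shifted T-F positions; the sketch below instead uses the common cRF convention in which the taps stored at $(t,f)$ weight the neighboring bins, which is an assumption on our part (the names and shapes are also ours):

```python
import numpy as np

def apply_crf(Y, crf, K=1):
    """Filter-and-sum over the (2K+1) x (2K+1) T-F neighborhood (cf. Eq. (5)).

    Y   : complex STFT, shape (T, F, M)
    crf : complex ratio filter, shape (T, F, 2K+1, 2K+1), shared across mics
    """
    T, F, _ = Y.shape
    Yp = np.pad(Y, ((K, K), (K, K), (0, 0)))   # zero-pad the T-F edges
    out = np.zeros_like(Y)
    for t1 in range(2 * K + 1):
        for t2 in range(2 * K + 1):
            out += crf[:, :, t1, t2, None] * Yp[t1:t1 + T, t2:t2 + F, :]
    return out

def frame_covariance(S_hat, crm):
    """Frame-wise, mask-normalized covariance of Eq. (6) at one (t, f).

    S_hat : complex speech estimate at that bin, shape (M,)
    crm   : center units of the speech cRF over the chunk, shape (T,)
    """
    return np.outer(S_hat, S_hat.conj()) / np.sum(crm.conj() * crm).real
```

Setting all taps except the center to zero reduces `apply_crf` to a plain complex ratio mask, which is a convenient correctness check.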
Given the noise $\text{cRF}_{\textbf{N}}(t,f)$, the estimated multi-channel noise $\hat{\textbf{N}}(t,f)$ and the frame-wise noise covariance matrix ${\bf{{\Phi}}}_{\textbf{N}}(t,f)$ could be estimated in the same way. Different from Eq. (2), where the covariance matrices are averaged over a chunk of frames in the traditional mask-based MVDR, the covariance matrices here are frame-wise. This is because the covariance matrices are later fed into RNNs, where a unidirectional RNN could automatically accumulate the statistics of the covariance matrices across history frames in a recursive way. Note that the denominator in Eq. (6) is the commonly used mask normalization [32, 29, 6, 13] to normalize the covariance matrix. In this work, we propose to use layer normalization [31] to normalize the covariance matrices to achieve better performance. $\small{\bf{{\Phi}}}_{\textbf{S}}(t,f)=\text{LayerNorm}(\hat{\mathbf{S}}(t,f)\hat{\mathbf{S}}^{\sf H}(t,f))$ (7) where the layer normalization [31] applies a per-element scale and bias with a learnable affine transform, which is more flexible than the mask normalization. Another layer normalization is also adopted for ${\bf{{\Phi}}}_{\textbf{N}}(t,f)$. ### 3.2 Spatio-temporal RNN GEV beamformer Similar to the ADL-MVDR [28], the proposed spatio-temporal RNN GEV beamformer (RNN-GEV) also takes the estimated target speech covariance matrix ${\bf{{\Phi}}}_{\textbf{S}}(t,f)$ and the noise covariance matrix ${\bf{{\Phi}}}_{\textbf{N}}(t,f)$ as the input to predict the frame-wise beamforming weights. Following the solution of the traditional GEV beamformer defined in Eq.
(4), we reformulate its form in the RNN-based beamforming framework as,

${\bf{\hat{\Phi}}}_{\textbf{N}}^{-1}(t,f)=\text{RNN}({\bf{{\Phi}}}_{\textbf{N}}(t,f))$ (8)

${\bf{{\hat{\Phi}}}}_{\textbf{S}}(t,f)=\text{RNN}({\bf{{\Phi}}}_{\textbf{S}}(t,f))$ (9)

$\mathbf{w}_{\text{RNN-GEV}}(t,f)=\text{DNN}({\bf{\hat{\Phi}}}_{\textbf{N}}^{-1}(t,f){\bf{{\hat{\Phi}}}}_{\textbf{S}}(t,f))$ (10)

$\hat{\mathbf{S}}(t,f)=(\mathbf{w}_{\text{RNN-GEV}}(t,f))^{\sf H}\mathbf{Y}(t,f)$ (11)

where $\mathbf{w}_{\text{RNN-GEV}}(t,f)\in\mathbb{C}^{M}$. ${\bf{{\hat{\Phi}}}}_{\textbf{S}}(t,f)$ is the speech covariance matrix accumulated from the history frames by leveraging the temporal modeling capability of RNNs, and ${\bf{\hat{\Phi}}}_{\textbf{N}}^{-1}(t,f)$ plays the role of the matrix inverse of ${\bf{\Phi}}_{\textbf{N}}(t,f)$. Instead of using the actual generalized PCA (as in Eq. (4)), a deep neural network (DNN) is utilized to calculate the beamforming weights for the RNN-GEV. Hinton et al. [36] showed that a DNN has the ability to conduct non-linear generalized PCA. ### 3.3 Generalized spatio-temporal RNN beamformer Finally, we propose a more generalized spatio-temporal RNN beamformer (GRNN-BF) that does not follow any traditional beamformer’s solution. This is motivated by the observation that different beamformers (e.g., MVDR and GEV) have different solutions, but almost all of these solutions are derived from the speech and noise covariance matrices. A neural network should thus be able to learn a better solution directly from the speech and noise covariance matrices. The RNN-GEV and the ADL-MVDR [28] both have two RNNs to deal with the target speech covariance matrix ${\bf{\hat{\Phi}}}_{\textbf{S}}(t,f)$ and the noise covariance matrix ${\bf{{\hat{\Phi}}}}_{\textbf{N}}(t,f)$, respectively.
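As a concrete sketch of the two-RNN design of Eqs. (8)-(11), the following PyTorch module processes flattened real/imag covariance features for one frequency bin; the layer sizes and the concatenation of the two RNN outputs before the DNN (in place of the matrix product in Eq. (10)) are illustrative simplifications of ours, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class RNNGEV(nn.Module):
    """Sketch of the RNN-GEV beamformer (Eqs. (8)-(11)); illustrative sizes."""

    def __init__(self, mics=15, hidden=500):
        super().__init__()
        feat = 2 * mics * mics                 # flattened real+imag covariance
        self.rnn_n = nn.GRU(feat, hidden, num_layers=2, batch_first=True)
        self.rnn_s = nn.GRU(feat, hidden, num_layers=2, batch_first=True)
        self.dnn = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.PReLU(),
                                 nn.Linear(hidden, 2 * mics))  # Re/Im of w
        self.mics = mics

    def forward(self, phi_n, phi_s, Y):
        # phi_n, phi_s: (B, T, 2*M*M) flattened covariances at one frequency;
        # Y: (B, T, M) complex mixture at that frequency.
        h_n, _ = self.rnn_n(phi_n)             # plays the role of Eq. (8)
        h_s, _ = self.rnn_s(phi_s)             # Eq. (9)
        w = self.dnn(torch.cat([h_n, h_s], dim=-1))   # Eq. (10)
        w = torch.complex(w[..., :self.mics], w[..., self.mics:])
        return (w.conj() * Y).sum(dim=-1)      # Eq. (11): w^H Y per frame
```

The GRNN-BF of Sec. 3.3 differs mainly in replacing the two GRUs with one unified GRU over the concatenated speech and noise covariances.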
But the GRNN-BF uses only one unified RNN-DNN model to predict the frame-level beamforming weights directly,

$\mathbf{w}_{\text{GRNN-BF}}(t,f)=\text{RNN-DNN}([{\bf{\Phi}}_{\textbf{N}}(t,f),{\bf{{\Phi}}}_{\textbf{S}}(t,f)])$ (12)

$\hat{\mathbf{S}}(t,f)=(\mathbf{w}_{\text{GRNN-BF}}(t,f))^{\sf H}\mathbf{Y}(t,f)$ (13)

where $\mathbf{w}_{\text{GRNN-BF}}(t,f)\in\mathbb{C}^{M}$. The input to the RNN-DNN is the concatenated tensor of ${\bf{\Phi}_{\textbf{N}}}(t,f)$ and ${\bf{\Phi}_{\textbf{S}}}(t,f)$. All of the covariance matrices and beamforming weights are complex-valued, and we concatenate the real and imaginary parts of any complex-valued tensor throughout this work. ## 4 Dataset and experimental setup

Table 1: PESQ (range [-0.5, 4.5]), Si-SNR (dB) [8], SDR (dB) and WER (%) results for Conv-TasNet with STFT [33], several MVDRs and the proposed GRNN-BF systems. ”MN” and ”LN” denote mask normalization and layer normalization on the covariance matrices, respectively. PESQ is reported per angle between the target and other speakers (0-15°, 15-45°, 45-90°, 90-180°) and per number of overlapped speakers (1SPK, 2SPK, 3SPK); Si-SNR, SDR and WER are averages.

systems/metrics | 0-15° | 15-45° | 45-90° | 90-180° | 1SPK | 2SPK | 3SPK | PESQ Avg. | Si-SNR | SDR | WER
---|---|---|---|---|---|---|---|---|---|---|---
Reverberant clean reference | 4.50 | 4.50 | 4.50 | 4.50 | 4.50 | 4.50 | 4.50 | 4.50 | $\infty$ | $\infty$ | 8.26
Mixture (overlapped speech+noise) | 1.88 | 1.88 | 1.98 | 2.03 | 3.55 | 2.02 | 1.77 | 2.16 | 3.39 | 3.50 | 55.14
Conv-TasNet with STFT (i) [33] | 2.75 | 2.95 | 3.12 | 3.09 | 3.98 | 3.06 | 2.76 | 3.10 | 12.50 | 13.01 | 22.07
MVDR w/ MN (ii) [6] | 2.55 | 2.77 | 2.96 | 2.89 | 3.82 | 2.90 | 2.55 | 2.92 | 11.31 | 12.58 | 15.91
Multi-tap MVDR w/ MN (iii) [6] | 2.67 | 2.95 | 3.15 | 3.10 | 3.92 | 3.06 | 2.72 | 3.08 | 12.66 | 14.04 | 13.52
ADL-MVDR w/ MN (iv) [28] | 3.04 | 3.30 | 3.48 | 3.48 | 4.17 | 3.41 | 3.07 | 3.42 | 14.80 | 15.45 | 12.73
Prop. RNN-GEV w/ MN (v) | 3.11 | 3.36 | 3.55 | 3.54 | 4.19 | 3.48 | 3.14 | 3.48 | 15.34 | 15.88 | 12.07
Prop. RNN-GEV w/ LN (vi) | 3.15 | 3.39 | 3.57 | 3.56 | 4.19 | 3.51 | 3.17 | 3.51 | 15.55 | 16.07 | 11.75
Prop. GRNN-BF w/ MN (vii) | 3.17 | 3.40 | 3.58 | 3.59 | 4.21 | 3.53 | 3.19 | 3.52 | 15.48 | 16.03 | 11.86
Prop. GRNN-BF w/ LN (viii) | 3.23 | 3.45 | 3.62 | 3.60 | 4.23 | 3.57 | 3.24 | 3.56 | 15.84 | 16.38 | 11.36

Dataset: The methods are evaluated on the Mandarin audio-visual corpus [37, 33], which is collected from YouTube [38]. The dataset has 205500 clean speech segments (about 200 hours) over 1500 speakers. The audio sampling rate is 16 kHz. A 512-point STFT with a 32 ms Hann window and 50% overlap is used to extract features. There are one to three overlapping speakers in each simulated 15-channel mixture signal. The signal-to-interference ratio (SIR) ranges from -6 to 6 dB. Noise with 18-30 dB SNR is added to all the 15-channel mixtures [37]. We use a 15-element non-uniform linear array. Based on the image-source simulation method [39], the simulated dataset contains 190000, 15000 and 500 multi-channel mixtures for training, validation and testing, respectively. The virtual acoustic room size ranges from 4m-4m-2.5m to 10m-8m-6m. The reverberation time T60 is sampled in the range of 0.05 s to 0.7 s. cRF estimator: We use the complex-valued ratio filter (cRF) [28, 35] to estimate the covariance matrices. As shown in Fig. 1, the input to the cRF estimator includes a 15-channel mixture audio and a target direction of arrival (DOA) ($\theta$). From the multi-channel audio, log-power spectra (LPS) and interaural phase difference (IPD) [37] features are extracted. For the simulated data, the ground-truth target DOA is known. For the real-world scenario, we have hardware in which the $180^{\circ}$ wide-angle camera and the 15-element linear microphone array are aligned [6].
Hence the target DOA ($\theta$) could be roughly estimated from the camera view by locating the target speaker’s face (see our actual hardware demo website: https://yongxuustc.github.io/grnnbf). Then the DOA-guided directional feature (DF) [41], namely $d(\theta)$, is estimated by calculating the cosine similarity between the target steering vector $\mathbf{\bm{v}}$ and the IPDs [41, 33]. The target DOA and $d(\theta)$ are speaker-dependent features which can be used to extract the target speech. The LPS, IPD and $d(\theta)$ features are merged and fed into a Conv-TasNet variant [8] with a fixed STFT encoder [37, 33]. A stack of eight successive dilated Conv-1D layers with 256 channels forms a network block, and four blocks are piled together. The estimated cRF [35] size ($K\times K$) is empirically set to $3\times 3$ [35, 28]. As for the RNN-BF module, the RNNs have 2-layer gated recurrent units (GRUs) with 500 hidden nodes. The non-linear DNN layer has 500 PReLU units. There are 30 linear units at the output DNN layer to predict the frame-wise beamforming weights. The model is trained in a chunk-wise mode with a 4-second chunk size, using the Adam optimizer. The initial learning rate is set to 1e-4. The objective is to maximize the time-domain scale-invariant source-to-noise ratio (Si-SNR) [8]. PyTorch 1.1.0 was used. The gradient norm is clipped with a max norm of 10. We evaluate the systems using different metrics, including PESQ, Si-SNR (dB) and signal-to-distortion ratio (SDR) (dB). A commercial general-purpose Mandarin speech recognition Tencent API [40] is used to test the ASR performance in WER. Note that this work only focuses on speech separation and denoising without dereverberation. Hence the reverberant clean speech (without dereverberation) is used as the reference signal. ## 5 Results and discussions We evaluate the target speech separation performance in the overlapped multi-talker scenario. The spatial angle between the target speaker and the others (interfering speakers) lies within 0-180∘.
More overlapping speakers and smaller spatial angles lead to more challenging separation tasks. The detailed PESQ scores across different scenarios (i.e., the angle between the target speaker and the other speakers, and the number of overlapped speakers) are presented in Table 1. The other metrics are given as average results.

GRNN-BF vs. traditional MVDRs: Two traditional MVDR systems, MVDR with mask normalization (ii) [6, 28] and multi-tap (i.e., [$t-1,t$]) MVDR with mask normalization (iii) [6, 28], are compared here. They also use the cRF estimator to calculate covariance matrices but replace the RNN-BF module (shown in Fig. 1) with conventional MVDR or multi-tap MVDR [6] solutions. Both work reasonably well, e.g., the multi-tap MVDR (iii) achieves 13.52% WER. However, the proposed GRNN-BF with mask normalization (vii) obtains significantly better performance. The proposed GRNN-BF (vii) increases the average PESQ to 3.52 from 3.08 for the multi-tap MVDR (iii) and 2.92 for the MVDR (ii). The WER of the proposed GRNN-BF (vii) is better than that of the multi-tap MVDR (iii), i.e., 11.86 vs. 13.52. The corresponding Si-SNR and SDR are increased to 15.48 dB and 16.03 dB, respectively. Fig. 2 also shows that the proposed GRNN-BF estimates the spectra with less residual noise than the traditional MVDR. The traditional MVDR has limited noise reduction capability [12, 6]. Finally, the proposed GRNN-BF with layer normalization (viii) achieves the best performance among all systems across all metrics. Moreover, the PESQ scores of our proposed GRNN-BF (viii) are above 3.2 in all scenarios, including the two most difficult cases, namely small angle ($<$15∘) and three overlapped speakers (3SPK).

GRNN-BF vs. RNN-MVDR/GEV: Our proposed RNN-GEV uses RNNs to implement the GEV beamformer following Eq. (4), while the ADL-MVDR [28] follows Eq. (3). With a more flexible structure (as shown in Sec.
3.2), the proposed RNN-GEV (v) is slightly better than the ADL-MVDR (iv), e.g., PESQ: 3.48 vs. 3.42; WER: 12.07 vs. 12.73. However, the proposed GRNN-BF (vii) is better than both of them. Compared to the ADL-MVDR (iv), the proposed GRNN-BF (vii) further improves the average PESQ from 3.42 to 3.52 and the average Si-SNR from 14.80 dB to 15.48 dB. Fig. 2 also shows that the proposed GRNN-BF can enhance the spectrogram with less residual noise than the ADL-MVDR. These results suggest that there is no need to follow any classical beamformer solution; the RNNs can learn a better solution directly from the speech and noise covariance matrices. Layer normalization is better than mask normalization for normalizing the covariance matrices for both the proposed RNN-GEV (vi) and GRNN-BF (viii).

GRNN-BF vs. Conv-TasNet: The Conv-TasNet with a fixed STFT encoder [33] is our cRF estimator (shown in Fig. 1), which is a variant of the original Conv-TasNet [8]. It is a purely “black-box” neural network system with the same multi-channel input. It predicts the target speech as defined in Eq. (5). The proposed GRNN-BF with layer normalization (viii) beats the Conv-TasNet with STFT (i) by a large margin, i.e., PESQ: 3.56 vs. 3.10; Si-SNR: 15.84 vs. 12.50; WER: 11.36 vs. 22.07. The Conv-TasNet yields the worst WER (22.07%) among all systems due to the non-linear distortion that is common in most purely neural-network-based speech separation systems [11, 42, 6]. This non-linear distortion can also be seen in the separated spectrogram of the Conv-TasNet in Fig. 2.

Layer normalization vs. mask normalization: As defined in the denominator of Eq. (6), mask normalization [32, 29, 6, 13] of the covariance matrix is commonly applied to stabilize training. However, the proposed layer normalization of the covariance matrix (as defined in Eq. (7)) is more flexible than the mask normalization.
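A hedged numpy sketch of our reading of the two schemes, a mask-normalized covariance estimate versus layer normalization applied to its entries (the paper's exact Eqs. (6) and (7) may differ in detail):

```python
import numpy as np

def mask_norm_cov(X, mask, eps=1e-8):
    """Mask-normalized covariance, Eq. (6)-style:
    Phi[f] = sum_t m(t,f) x x^H / sum_t m(t,f), per frequency bin."""
    phi = np.einsum('tf,tfc,tfd->fcd', mask, X, X.conj())
    return phi / (mask.sum(axis=0)[:, None, None] + eps)

def layer_norm_cov(phi, eps=1e-8):
    """Layer-norm over the flattened covariance entries, Eq. (7)-style
    (affine gain/bias omitted for brevity)."""
    v = phi.reshape(phi.shape[0], -1)
    mean = v.mean(axis=-1, keepdims=True)
    std = v.std(axis=-1, keepdims=True)
    return ((v - mean) / (std + eps)).reshape(phi.shape)

T, F, C = 100, 257, 15                   # frames, frequency bins, microphones
X = np.random.randn(T, F, C) + 1j * np.random.randn(T, F, C)
mask = np.random.rand(T, F)              # a real-valued T-F mask
phi = mask_norm_cov(X, mask)             # (F, C, C), Hermitian per bin
phi_ln = layer_norm_cov(phi)
print(phi.shape, phi_ln.shape)
```

The layer-normalized variant rescales the covariance statistics independently of the mask energy, which matches the paper's claim that it is the more flexible normalization.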
The proposed GRNN-BF with layer normalization (viii) obtains better performance than the GRNN-BF with mask normalization (vii), e.g., WER: 11.36 vs. 11.86; PESQ: 3.56 vs. 3.52.

Figure 2: Sample separated spectrograms of different target speech separation systems. More testing demos (including real-world recording demos to verify the generalization capability) can be found at: https://yongxuustc.github.io/grnnbf.

## 6 Conclusions

In summary, we proposed a generalized RNN beamformer (GRNN-BF) that learns the frame-level beamforming weights directly from the estimated speech and noise covariance matrices. Layer normalization achieves better performance than mask normalization for normalizing the speech and noise covariance matrices. The proposed GRNN-BF with layer normalization achieves the best objective scores (PESQ, Si-SNR, SDR) and the lowest WER among all evaluated systems. It achieves relative WER reductions of 10.8% and 16% against the prior methods, ADL-MVDR [28] and the conventional multi-tap MVDR [6], respectively. Although we only tested the proposed GRNN-BF on the DOA-guided target speech separation task, it could also be used in general scenarios without DOA information, e.g., the multi-channel speech enhancement task or the permutation invariant training (PIT) [9] based multi-channel speech separation task.

## References

* [1] J. Heymann, L. Drude, and et al., “Neural network based spectral mask estimation for acoustic beamforming,” in _ICASSP_, 2016.
* [2] Z.-Q. Wang and D. Wang, “Mask weighted stft ratios for relative transfer function estimation and its application to robust asr,” in _ICASSP_, 2018, pp. 5619–5623.
* [3] H. Erdogan, J. R. Hershey, and et al., “Improved MVDR beamforming using single-channel mask prediction networks.” in _Interspeech_, 2016.
* [4] J. Heymann, L. Drude, and et al., “Beamnet: End-to-end training of a beamformer-supported multi-channel ASR system,” in _ICASSP_, 2017.
* [5] X. Xiao, S.
Zhao, and et al., “On time-frequency mask estimation for MVDR beamforming with application in robust speech recognition,” in _ICASSP_ , 2017. * [6] Y. Xu, M. Yu, and et al., “Neural spatio-temporal beamformer for target speech separation,” _Interspeech_ , 2020. * [7] Y. Wang, A. Narayanan, and D. Wang, “On training targets for supervised speech separation,” _IEEE/ACM transactions on audio, speech, and language processing_ , vol. 22, no. 12, pp. 1849–1858, 2014. * [8] Y. Luo and N. Mesgarani, “Conv-tasnet: Surpassing ideal time-frequency magnitude masking for speech separation,” _IEEE/ACM transactions on audio, speech, and language processing_ , vol. 27, no. 8, pp. 1256–1266, 2019\. * [9] D. Yu, M. Kolbæk, and et al., “Permutation invariant training of deep models for speaker-independent multi-talker speech separation,” in _ICASSP_ , 2017. * [10] Y. Xu, J. Du, and et al., “A regression approach to speech enhancement based on deep neural networks,” _IEEE/ACM transactions on audio, speech, and language processing_ , vol. 23, no. 1, pp. 7–19, 2014. * [11] J. Du, Q. Wang, and et al., “Robust speech recognition with speech enhanced deep neural networks,” in _Interspeech_ , 2014. * [12] E. A. Habets and J. Benesty, “A two-stage beamforming approach for noise reduction and dereverberation,” _IEEE Transactions on Audio, Speech, and Language Processing_ , vol. 21, no. 5, pp. 945–958, 2013. * [13] W. Zhang, C. Boeddeker, S. Watanabe, and et al., “End-to-end dereverberation, beamforming, and speech recognition with improved numerical stability and advanced frontend,” _arXiv preprint arXiv:2102.11525_ , 2021. * [14] S. Chakrabarty and E. A. Habets, “On the numerical instability of an LCMV beamformer for a uniform linear array,” _IEEE Signal Processing Letters_ , vol. 23, no. 2, pp. 272–276, 2015. * [15] C. Y. Lim, C.-H. Chen, and W.-Y. Wu, “Numerical instability of calculating inverse of spatial covariance matrices,” _Statistics & Probability Letters_, vol. 129, pp. 
182–188, 2017. * [16] S. Zhao and D. L. Jones, “A fast-converging adaptive frequency-domain MVDR beamformer for speech enhancement,” in _Interspeech_ , 2012. * [17] X. Mestre and M. A. Lagunas, “On diagonal loading for minimum variance beamformers,” in _Proceedings of the 3rd IEEE International Symposium on Signal Processing and Information Technology_ , 2003, pp. 459–462. * [18] Z.-Q. Wang, H. Erdogan, and et al., “Sequential multi-frame neural beamforming for speech separation and enhancement,” _arXiv preprint arXiv:1911.07953_ , 2019. * [19] Y. Kubo, T. Nakatani, and et al., “Mask-based MVDR beamformer for noisy multisource environments: introduction of time-varying spatial covariance model,” in _ICASSP_ , 2019. * [20] J. Wang, “A recurrent neural network for real-time matrix inversion,” _Applied Mathematics and Computation_ , vol. 55, no. 1, pp. 89–100, 1993\. * [21] Y. Zhang and S. S. Ge, “Design and analysis of a general recurrent neural network model for time-varying matrix inversion,” _IEEE Transactions on Neural Networks_ , vol. 16, no. 6, pp. 1477–1490, 2005. * [22] L. Liu, H. Shao, and et al., “Recurrent neural network model for computing largest and smallest generalized eigenvalue,” _Neurocomputing_ , vol. 71, no. 16-18, pp. 3589–3594, 2008. * [23] X. Wang, M. Che, and et al., “Recurrent neural network for computation of generalized eigenvalue problem with real diagonalizable matrix pair and its applications,” _Neurocomputing_ , vol. 216, pp. 230–241, 2016. * [24] J. Benesty, J. Chen, and Y. Huang, _Microphone array signal processing_. Springer Science & Business Media, 2008, vol. 1. * [25] J. Heymann, L. Drude, A. Chinaev, and R. Haeb-Umbach, “BLSTM supported GEV beamformer front-end for the 3rd CHiME challenge,” in _ASRU_ , 2015, pp. 444–451. * [26] F. Grondin, J.-S. Lauzon, and et al., “GEV beamforming supported by DOA-based masks generated on pairs of microphones,” _arXiv preprint arXiv:2005.09587_ , 2020. * [27] T. Van den Bogaert, S. 
Doclo, and et al., “Speech enhancement with multichannel wiener filter techniques in multimicrophone binaural hearing aids,” _The Journal of the Acoustical Society of America_ , vol. 125, no. 1, pp. 360–371, 2009. * [28] Z. Zhang, Y. Xu, and et al., “ADL-MVDR: All deep learning MVDR beamformer for target speech separation,” _ICASSP_. * [29] X. Xiao, C. Xu, and et al., “A study of learning based beamforming methods for speech recognition,” in _CHiME 2016 workshop_ , 2016. * [30] Z. Meng, S. Watanabe, and et al., “Deep long short-term memory adaptive beamforming networks for multichannel robust speech recognition,” in _ICASSP_ , 2017. * [31] J. L. Ba, J. R. Kiros, and G. E. Hinton, “Layer normalization,” _arXiv preprint arXiv:1607.06450_ , 2016. * [32] C. Boeddeker, H. Erdogan, T. Yoshioka, and R. Haeb-Umbach, “Exploring practical aspects of neural mask-based beamforming for far-field speech recognition,” in _ICASSP_ , 2018, pp. 6697–6701. * [33] R. Gu, S.-X. Zhang, and et al., “Multi-modal multi-channel target speech separation,” _IEEE Journal of Selected Topics in Signal Processing_ , vol. 14, no. 3, pp. 530–541, 2020. * [34] D. S. Williamson, Y. Wang, and et al., “Complex ratio masking for monaural speech separation,” _IEEE/ACM transactions on audio, speech, and language processing_ , vol. 24, no. 3, pp. 483–492, 2015. * [35] W. Mack and E. A. Habets, “Deep filtering: Signal extraction and reconstruction using complex time-frequency filters,” _IEEE Signal Processing Letters_ , vol. 27, pp. 61–65, 2019. * [36] G. E. Hinton and R. R. Salakhutdinov, “Reducing the dimensionality of data with neural networks,” _Science_ , vol. 313, no. 5786, pp. 504–507, 2006\. * [37] K. Tan, Y. Xu, and et al., “Audio-visual speech separation and dereverberation with a two-stage multimodal network,” _IEEE Journal of Selected Topics in Signal Processing_ , vol. 14, no. 3, pp. 542–553, 2020. * [38] S.-X. Zhang, Y. 
Xu, and et al., “$\mathbf{M}^{3}$: Multi-Modal Multi-channel dataset for cocktail party problems,” _in preparation_, 2020.
* [39] E. A. Habets, “Room impulse response generator,” _Technische Universiteit Eindhoven, Tech. Rep_, vol. 2, no. 2.4, 2006.
* [40] “Tencent ASR,” https://ai.qq.com/product/aaiasr.shtml.
* [41] Z. Chen, X. Xiao, and et al., “Multi-channel overlapped speech recognition with location guided speech extraction network,” in _SLT_, 2018.
* [42] Y. Luo, C. Han, and N. Mesgarani, “Distortion-controlled training for end-to-end reverberant speech separation with auxiliary autoencoding loss,” _arXiv preprint arXiv:2011.07338_, 2020.
# A practical approach for updating an integrity-enforced operating system

Wojciech Ozga, TU Dresden, Germany; Do Le Quoc, TU Dresden, Germany; and Christof Fetzer, TU Dresden, Germany

###### Abstract.

Trusted computing defines how to securely measure, store, and verify the integrity of software controlling a computer. One of the major challenges that makes trusted computing hard to apply in practice is software updates. Specifically, an operating system update causes an integrity violation because it changes the well-known initial state trusted by remote verifiers, such as integrity monitoring systems. Consequently, the integrity monitoring of remote computers becomes unreliable due to the high number of false positives. We address this problem by adding an extra level of indirection between the operating system and software repositories. We propose the trusted software repository (TSR), a secure proxy that overcomes the shortcomings of previous approaches by _sanitizing_ software packages. Sanitization consists of modifying unsafe installation scripts and adding digital signatures in such a way that software packages can be installed in the operating system without violating its integrity. TSR leverages shielded execution, i.e., Intel SGX, to achieve confidentiality and integrity guarantees for the sanitization process. TSR is transparent to package managers and requires no changes to the software package building and distribution processes. Our evaluation shows that running TSR inside SGX is practical: it induces only a $\sim 1.18\times$ performance overhead during package _sanitization_ compared to native execution without SGX. TSR supports $99.76$% of the packages available in the main and community repositories of Alpine Linux while increasing the total repository size by $3.6$%.

###### keywords: trusted computing, software updates, integrity measurement architecture (IMA), Intel software guard extensions

## 1. Introduction

In recent years, trusted computing (TC) technologies, such as Intel trusted execution technology (TXT) (intel_txt_whitepaper, ), integrity measurement architecture (IMA) (tcg_ima_spec, ; ima_design_2004, ), and the trusted platform module (TPM) (tpm_2_0_spec_architecture, ; ibm_tpm_tss, ), have received much attention both in industry and academia because of their capabilities for integrity measurement, remote attestation, and sealing. While promising at first glance, the approach of leveraging TC technologies suffers from technical issues. One of the major problems of applying them in production systems is the lack of support for operating system (OS) updates. Specifically, security patches, which might be released frequently and installed automatically, break system integrity. We refer to integrity as a security property describing that a computer runs only expected software in the expected configuration.

Figure 1. Problem of installing software updates in an integrity-enforced OS. Software updates change the software integrity measurement, which is reported by monitoring systems as an integrity violation. The main question addressed in this paper: how to distinguish between software manipulated by an adversary and correctly updated software?

To illustrate the problem of installing software updates, we first describe the concept of integrity verification provided by TC technologies. Verifiers (e.g., monitoring systems (intel_secl, ; opencit_01_org, ; ibm_tpm_acs, ) or virtual private network access points (strongswan_org, )) use hardware and software technologies (intel_txt_whitepaper, ; ima_design_2004, ; intel_ptt_whitepaper_2014, ), which implement trusted computing (drtm_tcg, ; tpm_2_0_spec, ; tcg_ima_spec, ), to identify compromised (executing disallowed software) or misconfigured (having a disallowed configuration) systems.
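The hardware-backed measurements these verifiers rely on are accumulated as a hash chain; a minimal sketch of the TPM-style extend operation (the mechanism detailed later in Sec. 2.3; helper names are ours, not the paper's):

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: PCR_new = H(PCR_old || measurement)."""
    return hashlib.sha256(pcr + measurement).digest()

def boot(components):
    """Each boot stage measures the next one before executing it."""
    pcr = bytes(32)                      # the PCR starts zeroed
    for blob in components:
        pcr = extend(pcr, hashlib.sha256(blob).digest())
    return pcr

good = boot([b"bootloader", b"kernel", b"init"])
updated = boot([b"bootloader", b"kernel v2", b"init"])
print(good.hex() == updated.hex())       # False: any change alters the final PCR
```

The final PCR value deterministically encodes the whole boot sequence, which is exactly why a legitimate update looks like a violation to a verifier holding the old expected value.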
In more detail, verifiers read from a remote computer a list of cryptographic hashes (a measurement report) calculated over every file loaded into the computer's memory since boot. Verifiers detect integrity violations by comparing the hashes to a whitelist, i.e., a list containing the hashes of approved software and configurations. Unfortunately, verifiers cannot distinguish whether software integrity changed due to malicious behavior or a legitimate software update (see Figure 1). Berger et al. (scalable_attestation, ) proposed to include in the measurement report digital signatures, which certify the integrity hashes of trusted software. This approach simplifies the verification process because verifiers require only a single certificate to check the signatures instead of a whitelist of all possible cryptographic hashes. Consequently, it opened an opportunity to support OS updates because the updates could incorporate digital signatures to vouch for the integrity of files changed during the update. OS distributions would have to change their software packaging process to issue and insert digital signatures of files inside packages (imasig_updates, ). This approach has, however, two limitations, which we address in this paper. First, it requires changes to the existing procedures of creating packages for every OS distribution. Second, software packages contain not only files that are extracted to the filesystem but also configuration scripts that might alter the OS configuration, thus breaking integrity. Instead of modifying the well-established process of package generation (which requires approval from the entire open-source community), an alternative approach consists of creating a standalone repository with modified packages containing digital signatures (imasig_updates, ). This approach requires a trusted organization that owns a signing key and re-creates packages after injecting digital signatures.
Such an organization must put additional effort into protecting the signing key and must have a good reputation to convince users to trust it. We argue that this might be difficult to achieve, considering incidents from the past when signing keys of major Linux distributions were leaked, affecting millions of users (fedore_signingkey_compromised, ; redhat_ssh_signing, ). Another problem is that an adversary controlling a repository can provide the OS with outdated packages containing known vulnerabilities (replay attack), or even prevent the OS from seeing the update (freeze attack) (cappos_look_2008, ; cappos2008package, ). The secure choice is to rely only on the _original repository_, which is a repository managed by a trusted organization, such as an official software repository of the OS distribution. However, this approach does not tolerate failure of the original repository, so the OS must also accept _mirrors_. Mirrors store a copy of the original repository and, in the case of open-source distributions, are hosted voluntarily. As reported by previous studies (cappos_look_2008, ), it is not difficult to create a custom mirror that becomes accepted as an official mirror. Therefore, we must tolerate that some of the available mirrors are controlled by an adversary, exposing operating systems to the threats mentioned above. For example, a compromised mirror of a popular repository once distributed a vulnerable version of software, allowing an adversary to remotely access the system (compromised_mirror, ). We present the trusted software repository (TSR), an intermediate layer between the OS and the software repository that provides _sanitized_ software packages. The installation of sanitized packages causes deterministic changes to the OS configuration and filesystem. Because such changes are verifiable by monitoring systems, TSR eliminates the risk of false positives.
According to our measurements, sanitization enables 99.76% of the packages available in the Alpine main and community repositories to be safely installed in integrity-enforced operating systems. TSR requires zero code changes to both monitoring systems and operating systems. Due to the shared nature of software repositories, we designed TSR as a service that can be hosted on third-party resources, e.g., in the cloud. TSR exploits a trusted execution environment (TEE), i.e., Intel software guard extensions (SGX) (mckeen_innovative_2013, ; costan2016intel, ; anati2013innovative, ), to protect the signing keys and TSR integrity. Our evaluation shows that running TSR inside SGX is practical; SGX induces on average a $1.18\times$ performance overhead during sanitization, up to $1.96\times$ for packages exceeding the available SGX memory. Note that the sanitization is performed in batch mode; hence, the slowdown has no practical impact. Last but not least, TSR accepts security policies, which reflect organization-specific security requirements. Specifically, each organization defines a list of mirrors. TSR uses the mirrors to establish a quorum on the correct version of a software package, thus tolerating mirrors compromised by an adversary. We show that TSR requires up to 2.2 seconds to establish a quorum from official Alpine mirrors distributed over three continents.

Figure 2. Overview of the software update process. Colors indicate different administrative domains and are consistent across all figures.

In summary, we make the following main contributions:

1. We propose a practical solution to support OS updates in integrity-enforced systems, with the following properties:
   _(a)_ The software packages are safe to install in integrity-enforced operating systems (§4.2).
   _(b)_ Our solution is transparent to the existing software update processes and infrastructure (§4.3).
   _(c)_ A minority of mirrors exhibiting Byzantine behavior are tolerated (§4.5).
2.
We realize the above-mentioned design by developing TSR, a secure proxy framework for supporting software updates in integrity-enforced operating systems (§5).
3. We have evaluated TSR using a series of micro-benchmarks and a real-world use case, Alpine Linux package updates (§6).

## 2. Background

To better understand the decisions taken in designing TSR, we start by providing background information on software update processes and on the existing technologies used to collect, report, and verify system integrity.

### 2.1. OS updates

Figure 2 shows a high-level overview of an OS update process: releasing, exposing, and installing new software versions. The process begins when software maintainers create a new software release that contains bug fixes or new features. The OS distribution community uses the source code of the new software release to create a software package. A software package is an archive containing software-specific files and meta-information required by the OS to install and manage the package. Packages are stored in a repository, from which end-users download them. A repository also stores a _metadata index_ that contains a digitally signed list of all packages. In this paper, we refer to a software repository controlled by an OS distribution community as an _original repository_. The original repository is the root of trust for software updates. The metadata file downloaded from the original repository provides information about the most recent versions of software available in the repository. As such, it can be used to verify that the OS is up-to-date. Repository mirrors contain a copy of the original repository. They are used to distribute the load and to decrease the latency of downloading packages. The community has limited control over the mirrors, which are typically supported by volunteer organizations. Importantly, mirrors do not have access to the signing key.
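A minimal sketch of how a client could use the metadata index to check that it is up-to-date; the index structure here is hypothetical, not the actual package index format:

```python
# Hypothetical metadata index: package name -> latest version, as a client
# might see it after downloading and signature-checking the signed index file.
index = {"openssl": (1, 1, 1), "busybox": (1, 33, 1), "musl": (1, 2, 2)}
installed = {"openssl": (1, 1, 0), "busybox": (1, 33, 1), "musl": (1, 2, 2)}

def outdated(installed, index):
    """Packages whose installed version is older than the index version."""
    return sorted(name for name, ver in installed.items()
                  if name in index and ver < index[name])

print(outdated(installed, index))  # ['openssl']
```

A freeze attack corresponds to an adversary serving an old (but validly signed) index so that this check reports nothing outdated, which is why the index's freshness matters as much as its signature.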
End-users verify that the metadata file and packages downloaded from mirrors originate from the original repository by verifying digital signatures using the public portion of the signing key provided by the OS distribution community.

### 2.2. Package managers

Operating systems use _package managers_ to simplify the installation, update, and removal of software. The majority of distributions ship with package managers that use pre-built packages (e.g., .rpm, .deb (DebianPackageSystem, ), .apk (AlpinePackageManagement, )), but some build software directly from sources (GentooPortage, ; ArchBuildSystem, ). In this paper, we focus only on pre-built packages, which we further refer to as _packages_.

Figure 3. The internal structure of a software package, e.g., the Alpine APK package format. The package authenticity and integrity can be verified by using the digital signature and the content hash. The digital signature is stored inside the header and is issued over the package control. The hash of the package contents is stored inside the meta-attributes of the package control.

A package is an archive containing software-specific files, installation scripts, meta-information (such as dependencies on other packages), and digital signatures. Figure 3 shows an example of a package in the Alpine Linux .apk format. The package header stores a digital signature issued by a developer with an offline signing key (a private key stored off the repository). The digital signature permits verifying the authenticity and the integrity of the package control, which contains installation scripts and meta-information describing the package dependencies, the software version, and a cryptographic hash of the package contents. The hash permits verifying the integrity of the executables, dynamic libraries, and configuration files stored inside the package. To install the package, the package manager first downloads it from the repository, or from middlemen such as a content delivery network (CDN) or mirrors.
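The two-level integrity check of Figure 3 (a signature over the package control, and a content hash inside the control) can be sketched as follows; an HMAC stands in for the real public-key signature, and the control layout is simplified, so this is illustrative rather than the actual .apk format:

```python
import hashlib
import hmac

SIGNING_KEY = b"offline-signing-key"      # stand-in for a real key pair

def sign(control: bytes) -> bytes:
    # HMAC as a stand-in for the developer's digital signature over the control.
    return hmac.new(SIGNING_KEY, control, hashlib.sha256).digest()

def verify_package(header_sig: bytes, control: bytes, contents: bytes) -> bool:
    # 1) The signature in the header vouches for the control section.
    if not hmac.compare_digest(header_sig, sign(control)):
        return False
    # 2) The hash inside the control vouches for the package contents.
    expected = control.split(b"datahash=")[1].splitlines()[0].decode()
    return hashlib.sha256(contents).hexdigest() == expected

contents = b"bin/busybox ... etc/profile ..."
control = (b"pkgname=busybox\ndatahash="
           + hashlib.sha256(contents).hexdigest().encode() + b"\n")
sig = sign(control)
print(verify_package(sig, control, contents))          # True
print(verify_package(sig, control, contents + b"!"))   # False: tampered contents
```

Note that this check authenticates the archive as a whole; it says nothing about which individual files end up on the filesystem, which is the gap the paper addresses.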
After that, it verifies that a trusted entity created the package. Finally, it runs the installation scripts and extracts the software-specific files to the file system.

### 2.3. Integrity measurements

To bootstrap the computer, multiple low-level software components execute. They form a chain of trust by following the rule that every component calculates a cryptographic hash (an integrity measurement) of the next component before executing it. The measurements are stored in tamper-resistant memory of a hardware root of trust, e.g., a TPM (tpm_2_0_spec, ). Eventually, one of the components measures the bootloader, which measures and loads the kernel. At the kernel level, the integrity measurements continue. The Linux kernel integrity measurement subsystem (Linux IMA (tcg_ima_spec, ; ima_design_2004, )) measures each file, executable, or library before loading it into memory. The list of all measurements, certified by a hardware root of trust (e.g., a TPM (tpm_2_0_spec, )), vouches for the system integrity (tcg_tpm_attestation, ). Integrity monitoring systems use the measurements to verify that only expected software has executed on the computer since its bootstrap.

## 3. Threats and challenges

### 3.1. Threat model

We assume an adversary whose goal is to install vulnerable software on a remote computer by exploiting the software update mechanism. The remote computer is configured to install updates from TSR, which itself relies on the original repository and official mirrors. The adversary has root access to the machine running TSR and to a minority of the machines hosting mirrors. In more detail, she controls up to _f_ mirrors out of a total of _2f + 1_ mirrors available to TSR. The adversary has access to all outdated packages that contain vulnerabilities, including outdated signed metadata files. Having root access to the machines hosting TSR and mirrors, she can prevent network connections to the original repository and to arbitrary mirrors.
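Under this f-of-2f+1 assumption, agreement on a package can be established by a simple majority quorum; an illustrative sketch (TSR's actual protocol appears in §4.5):

```python
from collections import Counter

def quorum_value(reports, f):
    """Accept a value reported by at least f+1 of the 2f+1 mirrors:
    with at most f Byzantine mirrors, f+1 matching reports must
    include at least one honest mirror, and honest mirrors agree."""
    value, votes = Counter(reports).most_common(1)[0]
    return value if votes >= f + 1 else None

# 3 mirrors (f = 1): one compromised mirror serves an outdated package hash.
reports = ["sha256:new", "sha256:new", "sha256:old-vulnerable"]
print(quorum_value(reports, f=1))           # 'sha256:new'
print(quorum_value(["a", "b", "c"], f=1))   # None: no quorum reached
```

When no quorum is reached, a client must treat the update as unavailable rather than pick an arbitrary mirror's answer.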
We assume that the OS distribution community, software maintainers, their internal processes (e.g., software development, package builds), and infrastructure are trusted. In particular, packages are built using legitimate compilers; signing keys are well protected; the original repository provides the most recent software versions. We do not consider attacks resulting from the incorrect design of package formats and metadata, e.g., the endless data attack and the extraneous dependencies attack (cappos_look_2008, ). This assumption is practical because the main repositories hosted by popular Linux distributions (e.g., Debian, Ubuntu, Red Hat, Alpine) and their corresponding package managers mitigate these attacks by digitally signing the metadata, which also includes package file sizes and integrity hashes. TEEs are vulnerable to side-channel attacks (Kocher2018spectre, ; vanbulck2018foreshadow, ). We exclude them from the threat model, assuming they can be addressed using dedicated tools (SpecLH2019, ; varys_2018, ; specfuzz_oleksi, ), by updating the microcode (intel2018l1tf, ), or by excluding a particular type of hardware during the remote attestation protocol (johnson2016intel, ).

### 3.2. Problem statement

Figure 4. Example of a package installation that changes the OS configuration and filesystem. Monitoring systems consider such a system compromised because the new OS configuration might, for example, allow an adversary to get remote access to the computer or to remotely exploit vulnerabilities in the replaced dynamic libraries.

We now introduce the main challenges and problems that shaped the TSR design.

Problem 1: How to modify the package so that the changes made to the OS configuration and filesystem are verifiable by the monitoring system? Monitoring systems regularly verify that remote computers run only expected software in the expected configuration.
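This whitelist-based check can be sketched as follows; the report format and helper names are illustrative, not from the paper:

```python
import hashlib

def measure(data: bytes) -> str:
    """An integrity measurement: a cryptographic hash of a loaded file."""
    return hashlib.sha256(data).hexdigest()

def verify(report: list, whitelist: set) -> list:
    """Return the measurements that do not match any approved hash."""
    return [h for h in report if h not in whitelist]

whitelist = {measure(b"sshd v8.4"), measure(b"/etc/ssh/sshd_config v1")}
report = [measure(b"sshd v8.4"), measure(b"/etc/ssh/sshd_config v2")]  # updated config
violations = verify(report, whitelist)
print(len(violations))   # 1: a legitimate update is indistinguishable from tampering
```

The sketch makes the core difficulty visible: without extra information, the verifier cannot tell whether the unmatched hash stems from a malicious change or from a benign update.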
Machines that fail the attestation might be restarted or reinstalled to bring the system back into the correct state. There also exist mechanisms to enforce OS integrity locally. Such mechanisms are built into the kernel (e.g., IMA-appraisal (ima_appraisal, )), allowing the kernel to authorize each file before loading it into memory. They make the integrity attestation more robust, preventing accidental or malicious changes to the filesystem. The main problem of applying trusted computing in production systems is, however, that software updates cannot be safely installed because they modify the OS configuration and change files in a way unknown to monitoring systems. Figure 4 shows why a package installation might move the OS into an untrusted state. After the package is downloaded (➊), the package manager executes software-specific installation scripts that modify the OS configuration (➋). Moreover, the package manager extracts software-specific files (➌), whose contents are not known to verifiers. The integrity of the OS configuration files and software-specific files is measured by trusted computing components (➍). Eventually, a monitoring system uses remote attestation to read the measurements (➎), thus detecting the OS integrity change. The OS is considered compromised. A strawman approach consists of providing the monitoring system with a list of valid measurements before installing a new package. In practice, constructing such a list a priori is a difficult problem because of the complex nature of software dependencies, the dependence of the OS configuration on the order in which software has been installed, and the unpredictable schedules of security updates.

Figure 5. Mirrors controlled by an adversary can provide outdated packages with known vulnerabilities (replay attack) or completely hide the presence of software updates (freeze attack). An adversary might prevent access to the original repository (the root of trust), forcing the OS to rely on mirrors.

Figure 6.
High-level overview of the trusted software repository (TSR). TSR is a proxy that modifies packages in a way that makes them safe to install in integrity-enforced operating systems. TSR, the TPM, and the integrity monitoring system are trusted.

Problem 2: How to modify packages without changing the well-established package creation process, which requires community approval? Previous studies proposed changing the package creation process operated by different Linux communities to include digital signatures that vouch for individual file integrity (scalable_attestation, ). Although different approaches have been proposed (berger_dpkg_patch, ; garrett_dpkg_patch, ), they have not gained enough community approval and have not been merged into upstream repositories. Therefore, a practical solution should not require changes to the existing package creation processes, i.e., it should be transparent to the existing update infrastructure and processes.

Problem 3: How to protect the signing key and guarantee the correct generation of signatures in the presence of a powerful adversary with administrative access to TSR? Assuming we know how to modify the package (problem 1), the OS would still reject the modified package because its digital signature would not match the package contents. This is expected behavior because it prevents operating systems from installing packages tampered with by an adversary. Therefore, the new package contents must be certified again. However, without community support, it is impossible to issue the signature because the community would restrict access to the signing key (problem 2). An alternative approach is to let TSR generate a custom signing key and use it to sign all modified packages. However, an adversary with access to the machine on which the signing keys are used might extract the signing key by simply reading the process memory using administrative rights or by exploiting memory corruption techniques (memory_corruption_techniques, ).
Consequently, the adversary might sign arbitrary packages, compromising all operating systems that trust the signing key.

Problem 4: How to ensure access to the most up-to-date packages despite having no connection to the main software repository? Software repositories are maintained by the OS distributions and provide public access to packages and updates. We refer to such repositories as _original repositories_ because new versions of packages and software updates are published directly there. Although the secure choice would be to always rely on the original repository controlled by a trusted organization, such a decision would introduce a single point of failure. For this reason, original repositories propagate software updates to mirrors, which expose them to a wide range of end client machines. As reported by previous studies, an adversary controlling a mirror can serve outdated, vulnerable packages, decreasing the security of operating systems relying on that mirror (cappos_look_2008, ; cappos2008package, ). Figure 5 shows that an adversary might prevent the OS from accessing the original repository, forcing it to use mirrors under her control.

## 4\. Approach: Trusted Software Repository

Our objective is to provide an architecture that:

* provides software updates which can be safely installed in an integrity-enforced OS,
* requires no changes to the process of how communities create and distribute software packages,
* tolerates the threats defined in §3.

### 4.1. Design

Figure 6 shows a high-level overview of the TSR design. It consists of four components: _(A)_ an integrity-enforced OS measured by trusted computing components, _(B)_ a monitoring system which remotely verifies OS integrity, _(C)_ mirrors, copies of the original repository, containing OS-dependent software packages, _(D)_ TSR, an intermediate layer that provides the OS with access to software packages that are safe to install in an integrity-enforced OS.
Now, we present how TSR integrates with the software update process. First, TSR fetches the most up-to-date packages from mirrors (➊) and modifies them so that they are safe to install (➋). Next, the package manager queries TSR to collect information about the latest versions of packages. After selecting packages to update, it downloads them from TSR (➌). Then, the package manager installs them (➍), causing a partial update of the existing OS configuration, replacement of existing files (e.g., dynamic libraries), and extraction of new files into the filesystem. Trusted computing components regularly measure these changes, and the corresponding integrity measurements are stored inside a TPM chip (➎). The monitoring system collects the attestation report (➏), which, next to integrity measurements, contains the corresponding digital signatures. After verifying the digital signatures and the integrity measurements, the monitoring system accepts the new state of the updated OS.

Table 1. Number of packages with and without custom configuration scripts in the Alpine Linux main and community repositories. Some packages (Safe=✗) contain scripts that break OS integrity.

| Main | Community | № packages in | Safe |
|---|---|---|---|
| 5665 | 5916 | Total | |
| 5531 | 5772 | Without scripts | ✓ |
| 24 | 29 | With safe scripts | ✓ |
| 110 | 115 | With unsafe scripts | ✗ |

### 4.2. Solution to Problem 1: Sanitization

To enable support for software updates, we must solve two problems. First, convince a monitoring system that the integrity measurements of files extracted from the software package to the OS are valid. Second, make sure that the execution of a software package installation script does not cause the transition of the OS into an untrusted state. To address these problems, we introduce the concept of package sanitization (Figure 6 (➋)).
It consists of verifying and modifying packages by i) changing installation scripts to ensure that their execution changes the OS configuration in a deterministic way; ii) predicting such configuration; iii) including digital signatures of files delivered with the software package and of the predicted OS configuration.

_Digital signatures._ Following the work of Berger et al. 2015, we propose that for each file stored inside a package, a corresponding digital signature certifying its integrity is also stored inside the package. The package manager would extract digital signatures to the filesystem, allowing the IMA to include them inside the attestation report. Consequently, the verifiers could recognize that the new integrity measurements are valid because they correspond to installation scripts and package-specific files.

_Installation scripts._ Software packages might contain scripts that are executed with administrative rights during the package installation. Developers or package creators provide such scripts, and there are no limitations on what kind of OS configuration changes scripts can make. Therefore it is possible that, due to a misconfiguration, a script reconfigures the OS, allowing remote access to the machine. We designed TSR to modify packages in such a way that the installation scripts change the OS configuration deterministically. Packages whose scripts cannot be sanitized are rejected by TSR and thus are not available for installation.

Table 2. Operations performed by installation scripts located in software packages in Alpine Linux repositories. Some operations (Safe=✗) break OS integrity. The last column ("TSR") indicates which operations are safe after the sanitization. _Filesystem changes_: add/remove/modify folders, symbolic links, and their permissions. _Empty scripts_: conditional checks, display information.
Operations executed in scripts:

| Main | Community | Type | Safe | TSR |
|---|---|---|---|---|
| 30 | 15 | Filesystem changes | ✓ | ✓ |
| 5 | 17 | Empty scripts | ✓ | ✓ |
| 17 | 19 | Text processing | ✓ | ✓ |
| 11 | 7 | Configuration change | ✗ | ✗ |
| 1 | 0 | Empty file creation | ✗ | ✓ |
| 97 | 104 | User/Group creation | ✗ | ✓ |
| 4 | 6 | Shell activation | ✗ | ✗ |

To design the script sanitization algorithm, we started by analyzing existing scripts wrapped inside packages available in the Alpine Linux repositories (v3.11 of the main (alpine_repo_main, ) and community (alpine_repo_community, ) repositories). Table 1 shows that 97.6% of packages do not contain any scripts. 81% of the remaining packages contain scripts that alter the OS configuration, breaking the system integrity. We analyzed commands executed inside the scripts to understand how they interfere with the OS configuration. Table 2 shows that 45 packages modify the filesystem structure (e.g., copying, moving, or removing files, directories, and symbolic links, also changing their permissions). From the OS integrity point of view, these actions are safe: they do not violate system integrity as defined by the IMA. Similarly, 36 packages execute text processing utilities (e.g., parsing existing OS configuration), which do not alter any existing file; thus, they are safe. However, 230 packages contain scripts modifying the OS configuration, creating new users and groups, activating new shells, or creating empty files. These scripts are unsafe because they modify existing file contents whose integrity is certified using pre-generated signatures (as discussed in the previous section).

_Script sanitization._ As we show next, the majority of the unsafe scripts produce a predictable output. Hence it is possible to predict the OS configuration before installing the package. The installation or update of 201 packages results in the creation of new users or groups.
In the case of Linux-based operating systems, three files are affected, i.e., /etc/passwd, /etc/group, and /etc/shadow. Interestingly, these files change in a deterministic way. Adding a new user or group results in adding a new well-defined line in at least one of these files. However, the order in which users and groups are created determines the final file contents. In particular, a different package installation order results in a different order in which users and groups are defined inside each file. Our solution consists of scanning the entire repository to learn about all possible users and groups that might be added by any software package. Then, we change each installation script in each package so that the script creates all possible users and groups in the same predefined order. Consequently, any selection of packages, in any order, always results in the same OS configuration: it contains all users and groups. Finally, TSR issues digital signatures over the predicted contents of the configuration files and modifies scripts to install the signatures in the target OS. Monitoring systems accept the new OS configuration because they read a measurement report containing the signatures, which vouch for the new configuration files' contents.

Our TSR implementation detected and sanitized two packages that not only create a user but also set an empty password and shell. Installation of such packages might cause a security breach by allowing an adversary to remotely connect to the OS using a well-known username and password (CVE_2019_5021, ). We reported our findings to the Alpine Linux community.

_Unsupported scripts._ TSR does not support 28 packages (0.24%) out of all packages available in Alpine repositories. In particular, TSR does not support packages whose installation changes arbitrary configuration files. For example, the package roundcubemail is not supported because it generates an unpredictable configuration file containing a random session key.
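The deterministic user and group creation described above can be sketched as follows. This is a minimal illustration, not TSR's actual code: the function names are invented, and the /etc/passwd line format is simplified. The key idea is that merging every user any package may create into one fixed order makes the predicted file content independent of which packages are installed and in which order.

```rust
use std::collections::BTreeMap;

/// Merge the users declared by all packages in the repository and emit
/// /etc/passwd-style lines in one fixed (uid-sorted) order, so that any
/// installation order yields identical, pre-signable file contents.
fn canonical_passwd(packages: &[Vec<(&str, u32)>]) -> String {
    // A BTreeMap orders entries by uid, independent of insertion order.
    let mut users: BTreeMap<u32, &str> = BTreeMap::new();
    for pkg in packages {
        for &(name, uid) in pkg {
            users.insert(uid, name);
        }
    }
    users
        .iter()
        .map(|(uid, name)| format!("{name}:x:{uid}:{uid}::/sbin:/sbin/nologin"))
        .collect::<Vec<_>>()
        .join("\n")
}

fn main() {
    let nginx = vec![("nginx", 100)];
    let redis = vec![("redis", 101)];
    // Both installation orders predict the same file content,
    // so one signature covers every possible outcome.
    assert_eq!(
        canonical_passwd(&[nginx.clone(), redis.clone()]),
        canonical_passwd(&[redis, nginx])
    );
    println!("ok");
}
```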
Although TSR could support it by generating the session key during the sanitization, such a solution would contradict the script functionality that provides a unique key per OS. On the other hand, TSR intentionally does not support software packages providing different shells (e.g., mksh, bash, tcsh). Their scripts modify the OS configuration by activating a newly installed shell using the _add-shell_ command. Although TSR might use the same technique as with adding users and groups, we argue that the installation of a custom shell should not occur during an OS update but should instead be part of the initial OS configuration.

### 4.3. Solution to Problem 2: Proxy

We designed TSR as a proxy between package managers and software repositories provided by the community. This design decision permits TSR to act as a separate software repository that serves sanitized packages signed directly by TSR. From the community point of view, no changes are required to the existing software package creation processes, software package formats, or the implementation of package managers. Package managers recognize TSR as a standard repository mirror. Hence, it is enough to adjust the OS configuration so that the package manager uses only TSR as a mirror.

### 4.4. Solution to Problem 3: Shielded execution

TSR requires a signing key to certify changes made to packages during the sanitization process. To protect the signing key from an adversary with root access to the machine, we propose to use a TEE. In particular, we propose to leverage SGX, which is Intel's central processing unit (CPU) extension providing confidentiality and integrity guarantees to applications running in environments in which the OS, hypervisor, or basic input/output system (BIOS) might have been compromised.
Other studies (palaemon_2020, ) demonstrated that applications running inside an enclave (a trusted execution environment provided by SGX) can generate, store, and use cryptographic keys that are only known to the specific application; not even a human being can read them. TSR's design relies on that concept. By running inside an enclave, TSR generates a signing key that is later used to sign all modified software packages. The public portion of the signing key is exposed to both operating systems and monitoring systems, which use it to verify that software packages were created by TSR.

Listing 1: Policy example

```
 1  mirrors:
 2    - hostname: https://alpinelinux/v3.10/
 3      certificate_chain: |-
 4        -----BEGIN CERTIFICATE-----
 5        (...)
 6        -----END CERTIFICATE-----
 7    - hostname: https://yandex.ru/alpine/v3.10/
 8      certificate_chain: |-
 9        -----BEGIN CERTIFICATE-----
10        (...)
11        -----END CERTIFICATE-----
12    - hostname: https://ustc.edu.cn/alpine/v3.10/
13      certificate_chain: |-
14        -----BEGIN CERTIFICATE-----
15        (...)
16        -----END CERTIFICATE-----
17  signers_keys:
18    - |-  # e.g., [email protected]
19      -----BEGIN PUBLIC KEY-----
20      (...)
21      -----END PUBLIC KEY-----
22    - |-  # e.g., [email protected]
23      -----BEGIN PUBLIC KEY-----
24      (...)
25      -----END PUBLIC KEY-----
26  init_config_files:
27    - path: /etc/passwd
28      content: |-
29        root:x:0:0:root:/root:/bin/ash
30        daemon:x:2:2:daemon:/sbin:/sbin/nologin
31        (...)
32    - path: /etc/shadow
33      content: |-
34        root:$6$UmJDHY...25/:18206:0:::::
35        daemon:!::0:::::
36        (...)
37    - path: /etc/group
38      content: |-
39        root:x:0:root
40        daemon:x:2:root,bin,daemon
41        (...)
```

### 4.5. Solution to Problem 4: Quorum

An adversary might leverage administrative privileges to drop network traffic to certain hosts. In particular, she might prevent TSR from accessing the original repository, forcing TSR to rely on a mirror serving outdated software packages.
As specified in §3, we assume that the majority of repository mirrors are available and provide the latest snapshot of the original repository. TSR does not trust any individual mirror. Instead, it reads _2f+1_ mirrors and only relies on information that matches the responses of at least _f+1_ mirrors. Importantly, TSR requires a quorum only when reading the metadata index. The packages can be downloaded from a single mirror because their integrity is verifiable using the metadata index.

To allow different organizations to specify individual security requirements (e.g., which mirrors to use, which package creators to trust) and to provide a custom initial OS configuration (e.g., initial users, groups, and passwords), TSR accepts security policies. Listing 1 shows an example of such a security policy. The format permits defining a list of mirrors (lines 1-16) and a list of trusted package signers (lines 17-25). The package signer is a developer or a build system (e.g., continuous integration and continuous deployment) that builds, signs, and deploys packages to the original repository. TSR enforces the security policy by publishing only software packages in versions offered by the majority of available mirrors and only created by trusted entities. The policy could be extended to support a private/closed variant in which an OS owner can specify a subset of supported software packages via a whitelist/blacklist of packages.

Figure 7. The protocol of distributing the public portion of the signing key, which can be used to verify the authenticity of the software packages.

Figure 7 shows how an organization can deploy a security policy to TSR. First, it establishes trust with TSR (➊) using the SGX remote attestation protocol (johnson2016intel, ), which permits ensuring that TSR executes inside an enclave on a genuine Intel CPU.
Then, it uploads the security policy (➋), causing TSR to generate a new signing key (➌), to store the security policy, and to return the public portion of the newly generated signing key (➍). Finally, the public key is distributed to all integrity-enforced operating systems and integrity monitoring systems (➎). At this point, the OS accepts sanitized software packages (➏), and the integrity monitoring system accepts integrity measurements of files digitally signed by TSR. In more detail, the integration between integrity monitoring systems and TSR consists of adjusting the integrity monitoring system's configuration to trust the TSR signing key. Hence, integrity monitoring systems accept integrity measurements signed by TSR. TSR returns the signing key during the repository initialization (§5.2) triggered by the OS owner (Figure 7).

## 5\. Implementation

We developed TSR in Rust, a programming language that ensures memory safety (matsakis_rust_2014, ). We rely on external Rust libraries, e.g., Hyper (hyperRs, ) and Rustls (rustls, ), to build the representational state transfer (REST) application programming interface (API) (fielding_information_2000, ). We use the Rust-based crypto library ring (rust_ring, ) to issue digital signatures. We use SCONE Rust cross-compilers (sconecuratedimages_rust, ) to execute TSR inside an SGX enclave. TSR is about 3.3k source lines of code, excluding external libraries. We rely on SGX because it provides the following properties: _confidentiality_ to protect the signing keys, _integrity_ to protect the sanitization process, and an _attestation protocol_ to remotely ensure TSR integrity during the policy deployment. Alternative TEEs (keystone_tee, ; amd_sev_api, ; intel_txt_whitepaper, ; flicker2008, ) providing similar functionality might be considered, but the threat model must be carefully adjusted to the TEE-specific implementation.
For example, TEEs relying on late-launch technologies (amd_sev_api, ; intel_txt_whitepaper, ; flicker2008, ) must assume a trusted link between the CPU and the TPM (winter_hijackers_2013, ; winter_hijackers_bus_2012, ), while others, like Keystone (keystone_tee, ), must assume a trusted boot process.

### 5.1. Supported package formats

Our prototype implementation of TSR supports _apk_ packages used by Alpine Linux. We selected Alpine Linux because it is a popular security-oriented Linux distribution that minimizes the amount of software required to run the OS. This is an important property for systems relying on trusted computing. In the future, we plan to add support for other formats (e.g., deb, rpm) used by other Linux distributions.

### 5.2. Repository initialization

TSR can be executed in the cloud and is operated by a cloud provider, who is responsible for correct hardware initialization, installation of the operating system, and TSR execution. The cloud provider exposes the hostname on which the TSR API is accessible by his clients. Multiple clients share a single TSR instance. Each client deploys a policy to create his individual, logically separated, software repository within the TSR instance. For each new repository, TSR, which runs inside an SGX enclave, generates a unique repository identifier and a unique signing key. The identifier and the public portion of the signing key are returned to the client as a response to the policy deployment request issued via https. Each client accesses his repository via the REST API after providing the identifier. By verifying the digital signature of the package, the client ensures that the package conforms to his requirements defined inside the policy.

### 5.3.
Package sanitization

We define package sanitization as an operation consisting of the following steps: verifying package integrity and authenticity, extracting files from the package archive, modifying the installation scripts (see §4.2), issuing digital signatures for all files inside the package, updating the metafile, and recreating the package. TSR issues digital signatures using the signing key generated during the policy deployment. The digital signatures are stored inside portable archive exchange (PAX) headers (pax_headers_format, ) of the tar archive (tar_format, ), which is logically equivalent to the package. Modern versions of tar extractors (e.g., GNU tar (gnu_tar, )) transparently copy specific PAX header values into extended attributes in the filesystem. Before opening a file, the Linux IMA scans extended attributes and includes the digital signature inside a dedicated file (the IMA log). Consequently, the monitoring systems read the measurement report and the IMA log. They check the integrity of every file measured by the IMA by verifying its digital signature included inside the IMA log.

### 5.4. OS configuration

Software repositories include information about software package sizes and hashes inside the repository metadata index to mitigate the endless data attack and the extraneous dependencies attack (cappos_look_2008, ). Operating systems read the package size and its hash from the metadata index to ensure they download a file of the expected size and contents. Because of that, when an OS requests TSR to return the metadata index for the first time, TSR downloads and sanitizes all packages listed in the upstream metadata index. Then, TSR generates a new metadata index that matches the sanitized packages and returns it. Although the first metadata index generation is time-consuming, subsequent requests require TSR to sanitize only packages that have changed on the upstream mirrors since the previous read.
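The PAX-record embedding from §5.3 can be illustrated with a minimal sketch. A pax extended-header record has the form `<len> <keyword>=<value>\n`, where the decimal length field counts the entire record, including its own digits; GNU tar maps `SCHILY.xattr.*` keywords onto filesystem extended attributes, which is how the signature later reaches the `security.ima` attribute read by the IMA. The function name and the placeholder signature value below are illustrative, not TSR's actual code.

```rust
/// Format one pax extended-header record: "<len> <keyword>=<value>\n",
/// where <len> is the decimal length of the whole record including itself.
fn pax_record(keyword: &str, value: &str) -> String {
    // Length of " keyword=value\n" without the length prefix.
    let body_len = 1 + keyword.len() + 1 + value.len() + 1;
    // The length field counts its own digits, so iterate to a fixed point.
    let mut len = body_len;
    loop {
        let total = len.to_string().len() + body_len;
        if total == len {
            break;
        }
        len = total;
    }
    format!("{len} {keyword}={value}\n")
}

fn main() {
    // Placeholder signature bytes; a real record would carry the
    // base64/binary IMA signature issued by TSR's signing key.
    let rec = pax_record("SCHILY.xattr.security.ima", "sig-bytes");
    // The record's stated length must equal its actual length.
    let stated: usize = rec.split(' ').next().unwrap().parse().unwrap();
    assert_eq!(stated, rec.len());
    println!("{rec}");
}
```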
Each integrity-enforced OS must be reconfigured to use the TSR repository instead of mirrors. Moreover, the OS must trust the packages signed by TSR; thus, the public portion of the signing key must be added to the list of trusted signers. This reconfiguration can be done automatically using configuration management systems such as Puppet (puppet, ) or Chef (chef_io, ).

### 5.5. Package caching

A slow read of software updates increases the vulnerability window for the time of check to time of use (TOCTOU) attack, where an adversary exploits existing vulnerabilities until the security patches become available in the repository. In the case of TSR, this time is increased by the sanitization process (see §4.2) and the time required to read the majority of available mirrors (see §4.5). To minimize the vulnerability window for the TOCTOU attack, TSR uses a local file system to cache the already sanitized packages, including the metadata index. TSR detects outdated software packages each time it reads a new metadata index from the upstream mirrors. Consequently, TSR invalidates the metadata index, downloads the new version of the package, sanitizes it, and stores the new version inside the cache.

An adversary might tamper with the cache by reverting software packages and the metadata index to outdated versions. To mitigate the attack, TSR stores metadata indexes (the latest one read from upstream mirrors and the one reflecting the already sanitized packages) inside its memory, whose integrity and freshness are guaranteed by SGX. TSR uses the first metadata index to check which software packages changed in the upstream mirrors. It uses the second metadata index to verify that a package read from the cache (untrusted disk) has not been rolled back before returning it to the OS. However, the data stored inside TSR memory is lost as soon as TSR is shut down, for example, due to an OS restart.
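The rollback check on the cached packages described above can be sketched as follows. This is a minimal sketch with illustrative names: `DefaultHasher` stands in for the cryptographic hash a real implementation would use, and the in-enclave metadata index is reduced to a map from package name to trusted digest.

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

/// Stand-in digest; TSR would use a cryptographic hash here.
fn digest(data: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    data.hash(&mut h);
    h.finish()
}

/// Serve a package read back from the untrusted disk only if its digest
/// matches the one recorded in the metadata index kept in enclave memory.
fn serve_from_cache<'a>(
    index: &HashMap<&str, u64>, // package name -> trusted digest
    name: &str,
    cached: &'a [u8], // bytes read from the untrusted disk
) -> Option<&'a [u8]> {
    (index.get(name) == Some(&digest(cached))).then_some(cached)
}

fn main() {
    let current = b"pkg-1.2".to_vec();
    let outdated = b"pkg-1.1".to_vec();
    let index: HashMap<_, _> = [("pkg", digest(&current))].into_iter().collect();

    // The up-to-date cached copy is served ...
    assert!(serve_from_cache(&index, "pkg", &current).is_some());
    // ... but a copy rolled back by an adversary is rejected.
    assert!(serve_from_cache(&index, "pkg", &outdated).is_none());
    println!("ok");
}
```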
To preserve the metadata indexes across TSR restarts, we extended the TSR implementation with support for the TPM monotonic counter (MC) (tpm_2_0_spec, ). After generating the metafile, TSR increases the MC value and uses SGX sealing (anati2013innovative, ) to store the metadata indexes together with the MC value on the disk. SGX sealing, and its reverse operation, unsealing, use a CPU- and enclave-specific key. Hence, only the same enclave running on the same CPU can unseal the previously sealed file. After the restart, TSR unseals the metadata indexes from the disk together with the MC value and verifies that the unsealed MC value matches the current MC value.

## 6\. Evaluation

In this section, we evaluate TSR to answer the following questions:

* What is the overhead related to the package sanitization?
* What are the performance limitations incurred by running TSR inside an SGX enclave?
* What is the cost of tolerating compromised mirrors?

Testbed. Experiments run on a rack-based cluster of Dell PowerEdge R330 servers equipped with an Intel Xeon E3-1280 v6 CPU, 64 GiB of RAM, and a Samsung SSD 850 EVO 1TB. All machines have a 10 Gb Ethernet network interface card (NIC) connected to a 20 Gb/s switched network. Support for SGX is turned on; hyper-threading is switched off. We statically configured SGX to reserve 128 MB of RAM for the enclave page cache (EPC) (costan2016intel, ). The CPUs are on microcode patch level 0x5e. We run Alpine Linux 3.10 with the Linux IMA enabled.

### 6.1. Package sanitization overhead

The sanitization process directly influences the software update process, i.e., the time after which software updates are visible to the OS and the latency taken by the OS to download the update. For that reason, we ran experiments in which we instrumented the sanitization process to measure its impact on packages from the main and community repositories of Alpine Linux.
The results are based on a 20% trimmed mean from six independent experiment executions.

How much time does it take to sanitize all packages?

Table 3. Time required to initialize a repository. We assume two scenarios. In the optimistic one, TSR has access to a copy of packages stored in a cache. In the pessimistic one, during the policy deployment, TSR must download all packages from the original repository.

| Time (pessimistic) | Time (optimistic) | Operation |
|---|---|---|
| 17 min | 0 min | Download packages |
| < 1 min | < 1 min | Policy deployment |
| 13 min | 13 min | Sanitize packages |
| 30 min | 13 min | Total |

From the OS perspective, a low repository initialization time results in faster delivery of software updates. Therefore, we calculated the time required to create a new repository, i.e., to download and sanitize all packages. In the case of package updates, this time is expected to be significantly lower because TSR would have to download and sanitize only a small number of packages.

Table 4. Spearman rank correlation coefficients ($\rho$) relating the package-specific properties and sanitization-specific operations. The corresponding p values are indicated by regular font in grey fields (p < 0.05), bold font in grey fields (p < 0.001); fields with regular font indicate p > 0.05.

| | number of files | package size |
|---|---|---|
| archive, compress | .46 | .61 |
| check integrity | -.62 | -.93 |
| generate signatures | .69 | .03 |
| modify scripts | -.27 | -.33 |

Table 3 shows the time taken to establish a new repository, assuming two scenarios. In the optimistic scenario, which takes about 13 min, TSR has access to pre-fetched packages, for example, pre-fetched by a service provider. In the pessimistic one, which takes about 30 min, TSR additionally downloads the original packages (about 3 GB of data) from upstream repositories. We argue that the download time can be greatly reduced by enabling parallel downloading. This performance improvement is left as part of future work.
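The timings above are reported as 20% trimmed means, a robust estimator that discards outliers before averaging. A minimal sketch, assuming the common convention of trimming the given fraction of samples from each tail (the paper does not spell out its exact convention):

```rust
/// 20% trimmed mean: sort the samples, drop `trim` of them from each end,
/// and average the rest. Assumes `trim` leaves at least one sample.
fn trimmed_mean(samples: &mut [f64], trim: f64) -> f64 {
    samples.sort_by(|a, b| a.partial_cmp(b).unwrap());
    let k = (samples.len() as f64 * trim) as usize; // samples dropped per side
    let kept = &samples[k..samples.len() - k];
    kept.iter().sum::<f64>() / kept.len() as f64
}

fn main() {
    // Six runs with one outlier; 20% of 6 = 1.2, so one sample is
    // trimmed from each side and the 30.0 outlier is discarded.
    let mut runs = [11.0, 12.0, 12.0, 13.0, 13.0, 30.0];
    assert_eq!(trimmed_mean(&mut runs, 0.2), 12.5);
    println!("ok");
}
```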
What are the main factors driving the sanitization time? TSR sanitizes all packages provided with a software update, thus introducing a delay in how fast the OS receives the update. Therefore, it is important to understand the main drivers controlling the sanitization time. Table 4 shows the correlations between package-specific properties (e.g., number of files inside a package, package size) and the proportional time contribution of certain components of the sanitization time. We observe a strong positive correlation ($\rho$ = 0.61) between the archive processing time and package size, which indicates that the archiving, compression, and decompression algorithms take more time to process bigger archives. Also, we observe a strong correlation ($\rho$ = 0.69) between signature generation and the number of files inside a package. It confirms the intuitive expectation that in packages containing many files, the signature generation becomes a dominant factor of the sanitization time. Furthermore, the strong negative correlation ($\rho$ = -0.93) between checking the package integrity and package size shows that the time required to check the package integrity becomes negligible for bigger packages because other operations (e.g., signature generation, archiving, compression, and decompression) become the dominant factors. All in all, we anticipate that the sanitization time is mainly driven by 1) extracting files from a package and compressing them again into a package, and 2) issuing digital signatures.

Figure 8. Time required to sanitize a package, depending on the number of files and size. Color represents package size after decompression. Packages whose size exceeds the EPC are marked as $\blacktriangle$. Boxplots indicate 5th, 25th, 50th, 75th, and 95th percentile.

How much time does it take to sanitize a package? To better estimate the time TSR requires to expose an update, we examine the time it takes to sanitize individual packages.
Figure 8 shows the relationship between sanitization time and package-specific properties, such as the package size and the number of files inside the package. The sanitization time is not evenly distributed; it ranges from 11 ms (50th percentile), through 36 ms (75th percentile) and 422 ms (95th percentile), to 30 seconds (100th percentile).

What is the impact of sanitization on the repository size? Repository size is the sum of all packages served by the repository. The higher the size, the more resources (e.g., disk space, bandwidth) are utilized. It not only increases the maintenance costs but also increases the latency because the OS requires more time to download packages. Figure 9 shows how the package sizes increase relative to the original package size, depending on the number of files located inside the package. In particular, the sanitization process increases package size by 12%, 27%, and 76% in the 50th, 75th, and 95th percentiles, respectively. Packages with many small files suffer most from sanitization because the sizes of file signatures (each signature is 256 bytes) constitute a dominant part of the total package size. However, the total repository size increases only by 3.6%, from 3000 MB to 3110 MB.

Figure 9. Increase of package size caused by sanitization, depending on the number of files inside the package. Color represents size of a package (files are compressed into a single archive). Boxplots indicate 5th, 25th, 50th, 75th, and 95th percentile.

Does caching decrease the latency of package download? TSR implements caching to decrease the latency of accessing sanitized packages; it stores on the disk the original version of the package (the one fetched from upstream and not yet sanitized) and the sanitized one.
We ran an experiment in which we measured how much time TSR requires to respond to a download request, assuming three scenarios: _(i)_ only the original packages are cached (_Original_), _(ii)_ both original and sanitized packages are cached (_Sanitized_), and _(iii)_ packages are not available in the cache (_None_). In the scenario without a cache, TSR downloads packages from an official Alpine mirror located on the same continent (average network latency of 26.4 ms). In the other two scenarios, TSR reads packages from the local disk. In each scenario, we requested TSR to return every package available in the upstream Alpine repository sequentially. We calculated the latency of downloading each package as a 20% trimmed average over five repeated downloads.

Figure 10 shows the distributions of package download latencies for the scenarios mentioned above. Caching the sanitization results decreases the average download latency $129\times$ when compared to the scenario where TSR runs without a cache. We anticipate that the latency variation (0.37 ms) is mainly caused by accessing the cache (e.g., reading packages of different sizes) and verifying package integrity after reading it from untrusted storage. Similarly, caching the original packages decreases the average download latency $2.7\times$ when compared to the scenario where TSR runs without a cache. This is mostly the result of reading a package from the local disk being faster than reading it from a remote mirror accessed over the network.

Figure 10. Comparison of package download latencies for scenarios in which TSR has access to original packages in the cache (_Original_), has access to already sanitized packages (_Sanitized_), and does not have access to any cached packages (_None_).

What is the end-to-end latency of installing an update sanitized by TSR?
Installation of a software update takes a considerable amount of time because a package manager must download and verify the update, prepare the system for the new package version (check dependencies, lock the installed-packages database), unpack the new software package, launch installation scripts, copy files, set permissions, and finally clean the filesystem of no-longer-necessary files. In this experiment, we check the end-to-end latency of installing an update consisting of either sanitized packages or native Alpine packages. We measure the update installation latency for more than 5000 packages cached in a repository, i.e., TSR serves sanitized packages from the cache. Before measuring each package, we install it and then tamper with the OS configuration to make the installed package appear outdated, by modifying the package version number and its integrity hash in the file-based database that Alpine Linux uses to track installed packages. Before measuring the next package, we uninstall the previously measured package from the OS. Figure 11. End-to-end latency of installing software updates. Figure 11 shows the experiment results, in which we use two repositories, TSR and an Alpine mirror, located in the same data center. We assume the differences in network latency between both setups to be negligible. The average update installation latency is 141 ms and 110 ms for TSR and the Alpine mirror, respectively. The higher latency observed when installing sanitized packages is caused by installing digital signatures in the filesystem.
### 6.2. SGX limitations
The current version of SGX has limited protected memory, up to 128 MB for SGXv1. Applications exceeding this amount cause SGX to swap memory, leading to performance degradation. Hence, we address the question: What is the performance overhead of running TSR inside an SGX enclave?
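One way to quantify such overhead is the ratio of the in-enclave and native latency distributions at matching percentiles; a stdlib sketch (hypothetical function name, not the measurement harness itself):

```python
from statistics import quantiles

def overhead_at(percentile, enclave_times, native_times):
    """Ratio of in-enclave to native processing time at one percentile.
    quantiles(..., n=100) returns the 99 cut points P1..P99."""
    enclave_q = quantiles(enclave_times, n=100)[percentile - 1]
    native_q = quantiles(native_times, n=100)[percentile - 1]
    return enclave_q / native_q
```

Comparing percentile-by-percentile, rather than only the means, is what exposes tail effects such as EPC paging for the largest packages.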
To answer this question, we observe that package sanitization is the most memory-consuming operation, because TSR extracts and manipulates the package entirely in memory. For that reason, we executed TSR without SGX as a baseline and measured the processing time of all available packages. Figure 12 shows the comparison of package sanitization times executed inside and outside an SGX enclave. We observe a minor overhead of executing inside SGX: $1.18\times$ at the 50th percentile, $1.12\times$ at the 75th, and $1.16\times$ at the 95th. However, for the top 5 percentiles, which represent packages with sizes exceeding the EPC, the SGX overhead increases to $1.96\times$ because of EPC paging. The total sanitization time required to process all packages in the repository increases from 9.5 min to 13.6 min ($1.43\times$) when running TSR inside an SGX enclave. Figure 12. Violin plot comparing sanitization times executed inside and outside of an SGX enclave. Boxplots indicate the 5th, 25th, 50th, 75th, and 95th percentiles.
### 6.3. Tolerating compromised mirrors
What is the overhead of mitigating compromised mirrors? In this experiment, we measured the latency with which TSR (running in Europe) returns the metadata index depending on the number of mirrors defined in the policy and their geographical locations. We increased the number of mirrors from one (the default setting currently used by operating systems) to ten. We divided the experiment into four scenarios. In each scenario, TSR uses official Alpine mirrors located on different continents, i.e., Asia, Europe, North America, and their combination (_All_). In each scenario, we calculated a 10% trimmed latency average over 20 consecutive requests. Figure 13 shows that the latency of downloading the metadata index depends on the number and location of mirrors. TSR returns the metadata index in less than 400 ms for up to five mirrors on the same continent.
In the case of 10 mirrors, TSR returns the metadata index in less than 1.2 seconds. We observed higher latency when using mirrors located on different continents, mainly due to higher network latency. Figure 13. Latency of downloading the repository index from TSR. The TSR instance is deployed in Europe. The last scenario (_All_) shows that the latencies measured when mirrors are evenly distributed across three continents are similar to the latencies measured when using mirrors located only in North America. This is a result of the TSR implementation: TSR contacts the fastest _f + 1_ mirrors and, in case they return different metadata indexes, contacts additional mirrors until reaching a quorum (_f + 1_ identical responses). Therefore, mirrors in Europe and North America were preferred, and the TSR latency depends on the slowest selected mirror. It is the responsibility of the TSR clients to decide on the tradeoff between security and performance. The experiment shows that even when specifying nine mirrors distributed across different continents, TSR returns the metadata index in about 2.2 seconds.
## 7\. Related work
Given the importance of software updates, numerous works have been proposed to ensure the security of software update systems (p2p_update_repositories, ; tuf, ; chainiac, ; kshot, ). Typically, they aim to protect the updates using cryptographic signatures and to transfer them to targets via secure connections. The critical aspect of these approaches is how to protect the signing keys, because their leakage compromises the update process. The Update Framework (TUF) (tuf, ) addresses the problem by assigning different roles for accessing specific signing keys, raising the bar for an adversary to get in possession of all keys. Unfortunately, TUF requires an online project registration; thus it cannot protect a community repository against several attacks, such as delivering arbitrarily modified packages.
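The mirror-agreement rule described above — query the fastest mirrors first and accept a metadata index once _f + 1_ mirrors return the same one — can be sketched as follows (a Python illustration with hypothetical names, not TSR's actual implementation):

```python
from collections import Counter

def metadata_quorum(mirrors, fetch_index, f):
    """Query mirrors in order of measured latency (fastest first) and
    return a metadata index once f + 1 mirrors agree on it.
    Returns None if no quorum of f + 1 identical responses is reached."""
    counts = Counter()
    for mirror in mirrors:
        index = fetch_index(mirror)  # e.g., an HTTPS fetch of the index
        counts[index] += 1
        if counts[index] >= f + 1:
            return index
    return None
```

With honest mirrors, the loop stops after the initial f + 1 fetches; only disagreement forces contacting slower mirrors, which matches the observed dependence of latency on the slowest selected mirror.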
Diplomat (kuppusamy_diplomat_2016, ) overcomes this shortcoming of TUF by dividing signing keys into offline and online keys. The online keys are used to provide fast package signing, a feature required in community repositories. Only the online keys are leaked in the case of a repository compromise, which is a manageable problem since they can easily be revoked and the repository can be regenerated with new online keys using the well-protected offline keys. CHAINIAC (chainiac, ) provides mechanisms to secure the entire software supply chain. Developers create Merkle trees defining software packages with their corresponding binaries. To approve a package release, they sign and submit the trees to co-signing witness servers, which verify the signatures from developers as well as the mapping between the sources and the binaries. This mechanism relies on blockchain technology, which permits maintaining the history of releases but increases the system’s complexity. With a similar goal but reduced complexity, in-toto (in-toto, ) offers a mechanism to cryptographically ensure the integrity of the software supply chain. It enables users to verify the integrity of the whole software supply chain. However, CHAINIAC, in-toto, and TUF do not consider the case in which the target systems are under the protection of trusted computing mechanisms. Thus, they do not protect against integrity violations caused by software updates. Recently, KShot (kshot, ) introduced a secure kernel live patching mechanism to fix security vulnerabilities. KShot makes use of system management mode and SGX to securely perform the patching process without trusting the underlying OS. Similarly, TSR leverages SGX to protect the software update patching mechanism (sanitization), but TSR also ensures that software updates do not break the OS integrity.
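CHAINIAC's source-to-binary mapping above rests on Merkle trees; a minimal root construction illustrates the idea (illustrative only — this is not CHAINIAC's exact tree encoding):

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root hash of a binary Merkle tree over the given byte strings.
    An unpaired node at any level is duplicated."""
    level = [_h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:  # odd count: duplicate the last node
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```

One signature over the root then authenticates every source and binary in the release: changing any leaf changes the root, so tampering is detectable without signing each artifact individually.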
We selected Intel SGX to implement TSR since it has become available in clouds (IBMCloudSGX, ; AzureSGX, ) and many confidential cloud-native applications have been ported to it, including analytics systems (sgx-pyspark, ; securetf, ), a key management system (palaemon_2020, ), and performance monitoring (teemon, ). TSR follows the idea introduced by Berger et al. (imasig_updates, ) of maintaining a custom mirror with modified packages containing digital signatures. Unlike the previous work, TSR removes the mirror owner from the trusted computing base by protecting the signing keys using a TEE. Also, TSR introduces the sanitization mechanism to enable the installation of packages containing installation scripts. Several previous studies also considered various security aspects of the mirrors in software update systems (cappos_look_2008, ; knockel_mitm_repositories, ; cappos_stork, ). Knockel et al. (knockel_mitm_repositories, ) indicated that man-in-the-middle attacks on third-party software are possible for open infrastructures. Fortunately, this can be handled by securing connections using modern TLS instead of outdated SSL technology. The Stork package manager (cappos_stork, ) provided mechanisms to handle various attacks from malicious mirrors by delegating selective trust to users, i.e., users specify which packages they trust to install. Mercury (kuppusamy_mercury_2017, ) addresses rollback attacks on software packages (cappos_look_2008, ; bellissimo_secure_software_update_2006, ) by maintaining a separate signed metafile at the package manager. However, Mercury does not address the problem of the first update, in which a package manager cannot ensure the freshness of the metadata index. TSR tackles this problem by relying on the repository metadata index obtained from the majority of mirrors, under the assumption that most mirrors are trustworthy.
## 8\. Conclusion
In this paper, we presented TSR, a trusted software repository, to support secure software updates for integrity-enforced operating systems relying on trusted computing. TSR is transparent to the existing implementations of package managers and software repositories. Importantly, it does not require changes to well-established distribution-specific procedures of creating software packages. Our implementation supports 99.76% of the packages available in the Linux Alpine main and community repositories. It can be hosted on-premises or in the cloud while maintaining strong security properties: it runs inside a trusted execution environment (TEE), enables clients to define custom security policies, and permits a minority of software repository mirrors to exhibit Byzantine behavior. Acknowledgment. We thank our shepherd Professor Hans P. Reiser and the anonymous reviewers for their insightful comments and suggestions, as well as Bohdan Trach, Oleksii Oleksenko, Maksym Planeta, Robert Krahn, and Mimi Zohar for their feedback and help. The research leading to these results has received funding from the Cloud-KRITIS Project and the LEGaTO Project (legato-project.eu), grant agreement No 780681.
## References
* [1] SCONE RUST cross-compilers. https://hub.docker.com/r/sconecuratedimages/rust, accessed on 08/09/2019. * [2] StrongSwan, an open-source IPsec implementation. https://www.strongswan.org, accessed on 15/09/2019. * [3] Alpine Linux. Alpine Linux community repository. http://dl-cdn.alpinelinux.org/alpine/edge/community/, accessed on 16/08/2019. * [4] Alpine Linux. Alpine Linux main repository. http://dl-cdn.alpinelinux.org/alpine/edge/main/, accessed on 16/08/2019. * [5] Alpine Linux. Alpine Linux package management. https://wiki.alpinelinux.org/wiki/Alpine_Linux_package_management, accessed on 16/08/2019. * [6] I. Anati, S. Gueron, S. Johnson, and V. Scarlata. Innovative technology for CPU based attestation and sealing.
In Proceedings of the 2nd International Workshop on Hardware and Architectural Support for Security and Privacy, volume 13 of HASP ’13. ACM, 2013. * [7] Arch Linux. Arch Linux: Arch build system. https://wiki.archlinux.org/index.php/Arch_Build_System, accessed on 16/08/2019. * [8] A. Bellissimo, J. Burgess, and K. Fu. Secure software updates: Disappointments and new challenges. In Proceedings of the 1st USENIX Workshop on Hot Topics in Security, HOTSEC’06, page 7, USA, 2006. USENIX Association. * [9] S. Berger, K. Goldman, D. Pendarakis, D. Safford, E. Valdez, and M. Zohar. Scalable attestation: A step toward secure and trusted clouds. IEEE Cloud Computing, 2(5):10–18, Sep. 2015. * [10] S. Berger, M. Kayaalp, D. Pendarakis, and M. Zohar. File Signatures Needed! Linux Plumbers Conference, 2016. * [11] B. Smith. Safe, fast, small crypto using Rust. https://github.com/briansmith/ring, accessed on 10/09/2019. * [12] J. Cappos, J. Samuel, S. Baker, and J. H. Hartman. A look in the mirror: attacks on package managers. In Proceedings of the 15th ACM Conference on Computer and Communications Security, CCS ’08, page 565, Alexandria, Virginia, USA, 2008. ACM Press. * [13] J. Cappos, J. Samuel, S. Baker, and J. H. Hartman. Package management security. University of Arizona Technical Report 08-02, 2008. * [14] J. Cappos, S. Baker, J. Plichta, D. Nyugen, J. Hardies, M. Borgard, J. Johnston, and J. H. Hartman. Stork: Package Management for Distributed VM Environments. In Proceedings of the 21st Large Installation System Administration Conference, LISA’07. USENIX Association, 2007. * [15] C. Carruth. Speculative load hardening. https://llvm.org/docs/SpeculativeLoadHardening.html, 2019. * [16] Chef Software Inc. Chef. https://www.chef.io/chef/, accessed on 17/09/2019. * [17] Intel Corporation. Strengthening Security with Intel Platform Trust Technology. In Intel Whitepaper, 2014.
https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/enterprise-security-platform-trust-technology-white-paper.pdf. * [18] V. Costan and S. Devadas. Intel SGX Explained. IACR Cryptology ePrint Archive, 2016. * [19] Debian Linux. Debian Linux: Debian package management. https://www.debian.org/doc/manuals/debian-reference/ch02.en.html, accessed on 16/08/2019. * [20] Advanced Micro Devices. AMD Secure Encrypted Virtualization API Version 0.22. Technical Preview, 2019. * [21] R. T. Fielding. Architectural Styles and the Design of Network-based Software Architectures. PhD thesis, 2000. * [22] The Linux Foundation. The Update Framework Project. https://theupdateframework.github.io, accessed on 30/05/2020. * [23] Free Software Foundation, Inc. Basic tar format. https://www.gnu.org/software/tar/manual/html_node/Standard.html, accessed on 07/05/2020. * [24] Free Software Foundation, Inc. Tar - GNU project - Free Software Foundation. https://www.gnu.org/software/tar/, accessed on 14/04/2020. * [25] P. W. Frields. Infrastructure report, 2008-08-22 UTC 1200. https://www.redhat.com/archives/fedora-announce-list/2008-August/msg00012.html, 2008. * [26] Gentoo Linux. Gentoo Linux: Portage build system. https://wiki.gentoo.org/wiki/Portage, accessed on 16/08/2019. * [27] J. C. Gordon. Microsoft Azure confidential computing with Intel SGX, accessed on 12/09/2020. * [28] J. Greene. Intel Trusted Execution Technology Hardware-based Technology for Enhancing Server Platform Security. In Intel Whitepaper, 2012. * [29] F. Gregor, W. Ozga, S. Vaucher, R. Pires, D. Le Quoc, S. Arnautov, A. Martin, V. Schiavoni, P. Felber, and C. Fetzer. Trust management as a service: Enabling trusted execution in the face of byzantine stakeholders. In Proceedings of the 50th IEEE/IFIP International Conference on Dependable Systems and Networks (DSN 2020), 2020. * [30] Hyper. Hyper. https://hyper.rs, accessed on 10/09/2019. * [31] IBM Corporation. IBM TPM Attestation Client Server.
https://sourceforge.net/projects/ibmtpm20acs/, accessed on 15/09/2019. * [32] IBM Corporation. IBM’s TPM 2.0 TSS. https://sourceforge.net/projects/ibmtpm20tss/, accessed on 15/09/2019. * [33] IEEE and The Open Group. The Open Group Base Specifications Issue 7, 2018 edition, IEEE Std 1003.1-2017. https://pubs.opengroup.org/onlinepubs/9699919799/utilities/pax.html#tag_20_92_13_03, accessed on 14/04/2020. * [34] Intel Corporation. Resources and response to side channel L1TF. https://www.intel.com/content/www/us/en/architecture-and-technology/l1tf.html, 2018. * [35] Intel Corporation. Intel Security Libraries for Data Center (Intel SecL-DC). https://01.org/intel-secl, accessed on 15/09/2019. * [36] Intel Corporation and National Security Agency. Intel Open Cloud Integrity Technology. https://01.org/opencit, accessed on 15/09/2019. * [37] S. Johnson, V. Scarlata, C. Rozas, E. Brickell, and F. Mckeen. Intel Software Guard Extensions: EPID Provisioning and Attestation Services. In Intel Whitepaper, 2016. * [38] J. Birr-Pixton. rustls. https://github.com/ctz/rustls, accessed on 10/09/2019. * [39] P. Karnati. Data-in-use protection on IBM Cloud using Intel SGX, accessed on 12/09/2020. * [40] J. Knockel and J. R. Crandall. Protecting free and open communications on the internet against man-in-the-middle attacks on third-party software: We’re foci’d. In Presented as part of the 2nd USENIX Workshop on Free and Open Communications on the Internet, Bellevue, WA, 2012. USENIX. * [41] P. Kocher, J. Horn, A. Fogh, D. Genkin, D. Gruss, W. Haas, M. Hamburg, M. Lipp, S. Mangard, T. Prescher, M. Schwarz, and Y. Yarom. Spectre attacks: Exploiting speculative execution. In 40th IEEE Symposium on Security and Privacy (S&P’19), 2019. * [42] R. Krahn, D. Dragoti, F. Gregor, D. Le Quoc, V. Schiavoni, P. Felber, C. Souza, A. Brito, and C. Fetzer. TEEMon: A continuous performance monitoring framework for TEEs.
In Proceedings of the 21st International Middleware Conference (Middleware), 2020. * [43] T. K. Kuppusamy, V. Diaz, and J. Cappos. Mercury: Bandwidth-effective prevention of rollback attacks against community repositories. In Proceedings of the 2017 USENIX Conference on USENIX Annual Technical Conference, USENIX ATC ’17, pages 673–688, USA, 2017. USENIX Association. * [44] T. K. Kuppusamy, S. Torres-Arias, V. Diaz, and J. Cappos. Diplomat: Using delegations to protect community repositories. In Proceedings of the 13th USENIX Conference on Networked Systems Design and Implementation, NSDI’16, pages 567–581, USA, 2016. USENIX Association. * [45] D. Le Quoc, F. Gregor, S. Arnautov, R. Kunkel, P. Bhatotia, and C. Fetzer. secureTF: A Secure TensorFlow Framework. In Proceedings of the 21st International Middleware Conference (Middleware), 2020. * [46] D. Le Quoc, F. Gregor, J. Singh, and C. Fetzer. Sgx-pyspark: Secure distributed data analytics. In Proceedings of the World Wide Web Conference (WWW), 2019. * [47] D. Lee, D. Kohlbrenner, S. Shinde, K. Asanović, and D. Song. Keystone: An open framework for architecting trusted execution environments. In Proceedings of the Fifteenth European Conference on Computer Systems, EuroSys ’20, New York, NY, USA, 2020. Association for Computing Machinery. * [48] J. Li, P. L. Reiher, and G. J. Popek. Resilient self-organizing overlay networks for security update delivery. IEEE J. Sel. Areas Commun., 22(1):189–202, Sept. 2006. * [49] C. Liebchen. Advancing Memory-corruption Attacks and Defenses. System Security Lab Fachbereich für Informatik Technische Universitaet Darmstadt, 2018. * [50] N. D. Matsakis and F. S. Klock, II. The Rust language. In Proceedings of HILT, 2014. * [51] M. Garrett. dpkg patch. https://gitlab.com/mjg59/dpkg/-/commits/master, accessed on 22/04/2020. * [52] J. M. McCune, B. J. Parno, A. Perrig, M. K. Reiter, and H. Isozaki. Flicker: An execution infrastructure for TCB minimization.
In Proceedings of the 3rd ACM SIGOPS/EuroSys European Conference on Computer Systems 2008, Eurosys ’08, pages 315–328, New York, NY, USA, 2008. ACM. * [53] F. McKeen, I. Alexandrovich, A. Berenzon, C. V. Rozas, H. Shafi, V. Shanbhogue, and U. R. Savagaonkar. Innovative instructions and software model for isolated execution. In Proceedings of the 2nd International Workshop on Hardware and Architectural Support for Security and Privacy - HASP ’13. ACM Press, 2013. * [54] K. Nikitin, E. Kokoris-Kogias, P. Jovanovic, N. Gailly, L. Gasser, I. Khoffi, J. Cappos, and B. Ford. CHAINIAC: Proactive software-update transparency via collectively signed skipchains and verified builds. In 26th USENIX Security Symposium (USENIX Security), pages 1271–1287, 2017. * [55] NIST. CVE-2019-5021. https://nvd.nist.gov/vuln/detail/CVE-2019-5021, accessed on 07/05/2020. * [56] O. Oleksenko, B. Trach, R. Krahn, A. Martin, C. Fetzer, and M. Silberstein. Varys: Protecting SGX enclaves from practical side-channel attacks. In USENIX ATC, 2018. * [57] O. Oleksenko, B. Trach, C. Fetzer, and M. Silberstein. SpecFuzz: Bringing Spectre-type vulnerabilities to the surface. In USENIX Security Symposium, 2020. * [58] Puppet Inc. Puppet - server automation framework and application. https://puppet.com, accessed on 17/09/2019. * [59] RedHat, Inc. Critical: openssh security update. https://access.redhat.com/errata/RHSA-2008:0855, 2008. * [60] D. Safford, D. Kasatkin, M. Zohar, R. Sailer, and S. Hallyn. An Overview of The Linux Integrity Subsystem. http://downloads.sf.net/project/linux-ima/linux-ima/Integrity_overview.pdf, accessed on 01/04/2020. * [61] R. Sailer, X. Zhang, T. Jaeger, and I. T. J. Watson. Design and Implementation of a TCG-based Integrity Measurement Architecture. In Proceedings of the 13th USENIX Security Symposium. USENIX Association, 2004. * [62] J. Shin, B. Jacobs, M. Scott-Nash, J. Hammersley, M. Wiseman, R.
Spiger, D. Wilkins, R. Findeisen, D. Challener, D. Desselle, S. Goodman, G. Simpson, K. Brannock, A. Nelson, M. Piwonka, C. Dailey, and R. Springfield. TCG D-RTM Architecture, Document Version 1.0.0. Trusted Computing Group, 2013. * [63] Slashdot Media. phpMyAdmin corrupted copy on Korean mirror server. https://sourceforge.net/blog/phpmyadmin-back-door/, 2012. * [64] S. Berger. [PATCH v2] Support for PAX extended header and Linux extended attributes. https://linux.debian.maint.dpkg.narkive.com/Jwr2kstj/patch-v2-support-for-pax-extended-header-and-linux-extended-attributes, accessed on 04/04/2020. * [65] S. Torres-Arias, H. Afzali, T. K. Kuppusamy, R. Curtmola, and J. Cappos. in-toto: Providing farm-to-table guarantees for bits and bytes. In 28th USENIX Security Symposium (USENIX Security), 2019. * [66] Trusted Computing Group. TCG Trusted Attestation Protocol (TAP) Information Model for TPM Families 1.2 and 2.0 and DICE Family 1.0. Version 1.0, Revision 0.36. https://trustedcomputinggroup.org/resource/tcg-tap-information-model/, accessed on 15/09/2019. * [67] Trusted Computing Group. TPM Library Part 1: Architecture, Family "2.0", Level 00, Revision 01.38. http://www.trustedcomputinggroup.org/resources/tpm_library_specification, accessed on 15/09/2019. * [68] Trusted Computing Group. TPM Library Specification, Family "2.0", Level 00, Revision 01.38. http://www.trustedcomputinggroup.org/resources/tpm_library_specification, accessed on 15/09/2019. * [69] Trusted Computing Group. TCG Infrastructure Working Group Architecture Part II - Integrity Management, Specification Version 1.0, Revision 1.0. https://trustedcomputinggroup.org/wp-content/uploads/IWG_ArchitecturePartII_v1.0.pdf, accessed on 21/09/2019. * [70] J. Van Bulck, M. Minkin, O. Weisse, D. Genkin, B. Kasikci, F. Piessens, M. Silberstein, T. F. Wenisch, Y. Yarom, and R. Strackx. Foreshadow: Extracting the keys to the Intel SGX kingdom with transient out-of-order execution.
In Proceedings of the 27th USENIX Security Symposium. USENIX Association, August 2018. * [71] J. Winter and K. Dietrich. A Hijacker’s Guide to the LPC bus. In Proceedings of the 8th European Conference on Public Key Infrastructures, Services, and Applications, Leuven, Belgium, September 2011. * [72] J. Winter and K. Dietrich. A hijacker’s guide to communication interfaces of the trusted platform module. Computers & Mathematics with Applications, 2013. * [73] L. Zhou, F. Zhang, J. Liao, Z. Ning, J. Xiao, K. Leach, W. Weimer, and G. Wang. KShot: Live Kernel Patching with SMM and SGX. In Proceedings of the IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), 2020.
# The Panchromatic Hubble Andromeda Treasury: Triangulum Extended Region (PHATTER) I. Ultraviolet to Infrared Photometry of 22 Million Stars in M33 Benjamin F. Williams Department of Astronomy, University of Washington, Box 351580, U.W., Seattle, WA 98195-1580, USA Meredith J. Durbin Department of Astronomy, University of Washington, Box 351580, U.W., Seattle, WA 98195-1580, USA Julianne J. Dalcanton Department of Astronomy, University of Washington, Box 351580, U.W., Seattle, WA 98195-1580, USA Dustin Lang McWilliams Center for Cosmology, Department of Physics, Carnegie Mellon University, 5000 Forbes Ave., Pittsburgh, PA Leo Girardi Padova Astronomical Observatory, Vicolo dell’Osservatorio 5, Padova, Italy Adam Smercina Department of Astronomy, University of Washington, Box 351580, U.W., Seattle, WA 98195-1580, USA Andrew Dolphin Raytheon, Tucson, AZ 85726, USA Steward Observatory, University of Arizona, Tucson, AZ 85726, USA Daniel R. Weisz Astronomy Department, University of California, Berkeley, CA 94720, USA Yumi Choi Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD, 21218, USA Eric F. Bell Department of Astronomy, University of Michigan, 323 West Hall, 1085 S. University Ave., Ann Arbor, MI, 48105-1107, USA Erik Rosolowsky University of Alberta, Department of Physics, 4-183 CCIS, Edmonton AB T6G 2E1, Canada Evan Skillman Minnesota Institute for Astrophysics, 116 Church Street SE, Minneapolis, MN 55455 Eric W. Koch University of Alberta, Department of Physics, 4-183 CCIS, Edmonton AB T6G 2E1, Canada Christine W. Lindberg JHU/STScI, 3700 San Martin Drive, Baltimore, MD, 21218, USA Lea Hagen Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD, 21218, USA Karl D. 
Gordon Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD, 21218, USA Anil Seth University of Utah, Salt Lake City, UT 84112 Karoline Gilbert Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD, 21218, USA Puragra Guhathakurta University of California - Santa Cruz, 1156 High Street, Santa Cruz, CA, 95064 Tod Lauer National Optical Astronomy Observatory, PO Box 26732, Tucson, AZ 85726 Luciana Bianchi JHU, 3400 North Charles St., 473 Bloomberg Center for Physics and Astronomy, Baltimore, MD, 21218 ###### Abstract We present panchromatic resolved stellar photometry for 22 million stars in the Local Group dwarf spiral Triangulum (M33), derived from Hubble Space Telescope (HST) observations with the Advanced Camera for Surveys (ACS) in the optical (F475W, F814W), and the Wide Field Camera 3 (WFC3) in the near ultraviolet (F275W, F336W) and near-infrared (F110W, F160W) bands. The large, contiguous survey area covers $\sim$14 square kpc and extends to 3.5 kpc (14 arcmin, or 1.5–2 scale lengths) from the center of M33. The PHATTER observing strategy and photometry technique closely mimic those of the Panchromatic Hubble Andromeda Treasury (PHAT), but with updated photometry techniques that take full advantage of all overlapping pointings (aligned to within $<$5-10 milliarcseconds) and improved treatment of spatially varying PSFs. The photometry reaches a completeness-limited depth of F475W$\sim$28.5 in the lowest surface density regions observed in M33. We present extensive analysis of the data quality, including artificial star tests to quantify completeness, photometric uncertainties, and flux biases. This stellar catalog is the largest ever produced for M33 and is publicly available for download by the community.
###### keywords: Stellar Populations
Facilities: HST(ACS/WFC), HST(WFC3/IR), HST(WFC3/UVIS). Software: Astropy (Astropy Collaboration et al., 2013, 2018), Astroquery (Ginsburg et al., 2017, 2019), Dask (Rocklin, 2015; Dask Development Team, 2016), DOLPHOT (Dolphin, 2000, 2016), Drizzlepac (STSCI Development Team, 2012; Hack et al., 2013; Avila et al., 2015), Matplotlib (Hunter, 2007), NumPy (van der Walt et al., 2011; Harris et al., 2020), Pandas (McKinney, 2010, 2011), Seaborn (Waskom et al., 2018), SciPy (Jones et al., 2001), Scikit-learn (Pedregosa et al., 2011), Vaex (Breddels & Veljanoski, 2018a, b)
## 1 Introduction
Resolved stellar photometry has the potential to constrain fundamental processes in astrophysics, including star formation, stellar evolution, feedback into the interstellar medium, galaxy formation and evolution, and chemical enrichment. The stars themselves are the fossil record of these processes, which leave signatures in the properties of individual stars, their mass distribution, their distribution of colors and magnitudes, and their spatial distribution with respect to other galactic tracers. While Gaia is transforming our understanding of the stars and structure of the Milky Way disk (Gaia Collaboration et al., 2018), we still need comparably detailed population studies of other disks to put our Galaxy and its stellar populations in context. The best targets for such studies are the galaxies in the Local Group, which contains two spirals other than the Milky Way — M31, a “green valley” Sb galaxy, and M33, a blue sequence, star-forming dwarf spiral. Along with the Milky Way, these two galaxies form our best anchors for baryonic processes in spiral galaxies. This set of three galaxies spans a large dynamic range, giving ample opportunities for contrasting how astrophysical processes are shaped by other parameters.
For example, both M31 and the Galaxy are of similar mass and metallicity (Watkins et al., 2010; Gregersen et al., 2015), but M31 seems to have a much more dramatic recent merger history (e.g., Hammer et al., 2018; D’Souza & Bell, 2018; Kruijssen et al., 2019). In contrast to both of these more massive partners, M33 is of lower mass and metallicity, and appears to have a relatively quiescent merger history, as suggested by its inside-out growth (Magrini et al., 2007; Williams et al., 2009; Beasley et al., 2015; Mostoghiu et al., 2018) and lack of a significant extended stellar halo (McConnachie et al., 2010; McMonigal et al., 2016) or prominent thick disk (Wyse, 2002; van der Kruit & Freeman, 2011). Thus, M33 probes a different set of physical and chemical evolution properties than the other Local Group disk galaxies, and we can constrain these in exquisite detail through measurements of M33’s constituent stars. Along with M31, previously surveyed with HST as part of the Panchromatic Hubble Andromeda Treasury (PHAT, Dalcanton et al., 2012), M33 is one of the richest galaxies in the Local Group for obtaining photometric measurements of resolved stars in a spiral galaxy. It is close enough that we can resolve stars all the way down to the ancient main sequence (Williams et al., 2009) over much of the disk. All of its stars are at the same distance and foreground extinction, alleviating issues related to the wide range of distances and extinctions of stars in the Galaxy. Furthermore, M33 has no significant bulge component beyond its nuclear cluster (McLean & Liu, 1996; Kormendy & McClure, 1993), meaning that there is no confusion between disk and bulge populations. M33’s value for obtaining knowledge about disk stellar populations is reflected in its rich history of resolved star studies, dating back to the 19th century (e.g., Roberts, 1899).
Since then, ground-based observations have studied the bright massive stars in great detail, providing estimates of star formation rate and constraints on the evolution of massive stars (e.g., Madore et al., 1974; Humphreys & Sandage, 1980; Massey et al., 1996, and many others). The bright stars that can be resolved from the ground were finally fully cataloged by Massey et al. (2006), and M33’s extended halo was probed by the Pan-Andromeda Archaeological Survey (PAndAS; McConnachie et al., 2010). More recent ground-based work focuses on the variability of these massive stars to further constrain their complex evolutionary stages (e.g., Gordon et al., 2016; Humphreys et al., 2017; Smith et al., 2020, and many others). Over the past few decades, these ground-based studies of M33 have been supplemented with HST imaging, both farther into the ultraviolet (e.g., Chandar et al., 1999; Hoopes & Walterbos, 2000) and to much fainter depth in the optical (e.g., Mighell & Rich, 1995; Sarajedini et al., 2000; Barker et al., 2007a, b; Williams et al., 2009). These capabilities have provided deep insight into the properties of the youngest and oldest stars and stellar clusters, as well as the formation processes of the M33 disk (e.g., van der Kruit & Freeman, 2011, and references therein). M33’s stellar population studies benefit from the legacy of surveys across virtually all wavelengths.
Its cold interstellar medium (ISM) has been mapped through 21 cm maps of atomic H I (Deul & van der Hulst, 1987; Gratier et al., 2010; Koch et al., 2018), through millimeter maps of molecular gas in the CO($J=1-0$) and CO($J=2-1$) lines (Figure 2; see e.g., Heyer et al., 2004; Gratier et al., 2010; Engargiola et al., 2003; Rosolowsky et al., 2003, 2007; Druard et al., 2014), and through extensive studies of dust through Spitzer (Hinz et al., 2004; McQuinn et al., 2007), Herschel (most notably the HerM33es project; e.g., Kramer et al., 2010; Xilouris et al., 2012), and long-wavelength facilities like APEX and Planck (e.g., Hermelo et al., 2016; De Paolis et al., 2016; Tibbs et al., 2018). M33 has been mapped with GALEX in the near and far UV (Thilker et al., 2005), hard X-rays from NuSTAR (West et al., 2018), softer X-rays from Chandra (Tüllmann et al., 2011) and XMM-Newton (Williams et al., 2015), gamma rays from Fermi (Xi et al., 2020), and deep radio continuum (White et al., 2019). This rich compendium of multi-wavelength data, and its associated catalogs, can serve both to support interpreting M33’s resolved HST photometry, and to be interpreted in turn by improved knowledge of M33’s stellar content and spatially resolved star formation history. Recently, the power of wide-area, panchromatic imaging of nearby galaxies has been demonstrated by PHAT (Dalcanton et al., 2012; Williams et al., 2014), which covered roughly one third of the star-forming disk of M31 in 6 bands ranging from the near-UV to the near-IR. The PHAT survey has produced scientific return spanning a wide range of topics, from star clusters (Johnson et al., 2015) and the initial mass function (IMF) (Weisz et al., 2015) to star formation history (Lewis et al., 2015; Williams et al., 2017), calibration of star formation rate indicators (Lewis et al., 2017), metal retention (Telford et al., 2019), mass-to-light ratios (Telford et al., 2020), and dust (Dalcanton et al., 2015).
In this paper we present first results from an equivalent high-resolution, 6-band survey of M33, so that we may provide a resolved stellar photometry catalog of the same quality, giving the community the ability to probe the same processes in a galaxy with very different physical properties, including lower mass, lower metallicity, and higher star formation intensity. Once the stellar populations of both galaxies are measured in such exquisite detail, the power of direct comparison will likely lead to even more illuminating results. Herein we describe our large HST survey of M33, PHATTER. Section 2 describes our observing strategy and data reduction techniques. Section 3 provides our results, including our final catalog of 6-band panchromatic photometry of all of the stars detected in our observations. Section 4 then investigates the quality of the photometry in the catalog, including analysis of the luminosity function and artificial star tests. Finally, Section 5 summarizes the paper. Throughout, we assume a distance to M33 of 859 kpc ($m-M=24.67$; de Grijs et al., 2017).

## 2 Observations and Data Analysis

### 2.1 Observing Strategy

The highest impact science we anticipate from the new M33 observations comes from exploring galactic environments that are distinct from other Local Group galaxies. Our observing strategy was therefore designed to make comparisons as straightforward as possible, by reproducing the observing strategy for M31, but targeting regions of M33 with complementary properties. As shown in Figure 1, M33 is a lower metallicity galaxy than most of M31 at the present day, and its inner regions nicely bridge the metallicity gap between the LMC and M31’s outer regions. Those same inner regions of M33 also have a typical star formation rate intensity that is nearly a factor of 10 higher than in the area covered by the PHAT survey in M31, adding considerable leverage to studies of the interaction between stars and the ISM.
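The adopted distance and distance modulus quoted above are related by $d = 10^{(m-M+5)/5}$ pc; a quick sketch confirming the conversion:

```python
# Convert M33's adopted distance modulus (m - M = 24.67;
# de Grijs et al. 2017) into a physical distance.
mu = 24.67                           # distance modulus, mag
d_kpc = 10 ** ((mu + 5) / 5) / 1e3   # parsecs -> kpc
print(f"{d_kpc:.0f} kpc")            # -> 859 kpc
```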
We therefore targeted the new M33 observations on these inner regions, where there is also considerable multiwavelength coverage from other observatories. We build up this survey area using the same PHAT tiling strategy, as described in Dalcanton et al. (2012). Observations are organized into “bricks” of 3$\times$6 WFC3/IR footprints (Figure 3), with observations of each 3$\times$3 half-brick taken $\sim$6 months apart, after the telescope has rotated 180$^{\circ}$ (see Figure 3), with ORIENT=55 for one half brick and 235 for the other. At each pointing, WFC3/UVIS observations are taken in one orbit and WFC3/IR observations are taken in another, while ACS/WFC observes in parallel, covering the adjacent half brick. When the telescope rotates orientation $\sim$6 months later, the primary WFC3 observations cover the area of the original ACS parallels, and vice versa. Note that this produces a time difference between the optical and UV$+$IR observations, which may produce unusual colors for time-varying sources. Observations for this program (GO-14610) were taken between February 21, 2017, and February 25, 2018. The downloaded calibrated images used for photometry were processed under OPUS versions 2016_2 - 2017_3b. For ACS/WFC and WFC3/UVIS, we start with the CTE-corrected flc image files (Anderson & Bedin, 2010; Anderson & Ryon, 2018). For WFC3/IR, we start with flt image files. We chose a 3-brick mosaic to maximize coverage of the high star-formation intensity regions and existing CO detections (Figure 3). Brick 1 is the 3$\times$6 array covering the northern portion of the galaxy, Brick 2 covers the center, and Brick 3 is to the south. Within each Brick, each WFC3/IR pointing area is given a field number, with Field 1 being the upper left on Figure 3 and Field 18 being the lower right. ACS observations are labeled with the WFC3 field that they overlap.
Of the 54 pointings, one field (Brick 2, Field 5) had no guide stars available in the desired orientation, and was therefore rotated slightly to make observations possible. This change led to a slight ($\sim$20 arcsec) gap in coverage at the northwest corner of Brick 2 (01:33:30, 30:44:00). In total, the survey area tiled the inner 13.2$\times$19.8 arcmin (3.1$\times$4.6 kpc, projected; 4.3$\times$4.6 kpc, deprojected) of M33, extending to roughly $\sim$1.6 disk scale lengths, assuming a $6^{\prime}$ scale length (Regan & Vogel, 1994). We adopted an identical exposure sequence (Table 1) and dithering strategy as in PHAT, with the only significant change being the switch to using UV pre-flash to minimize CTE losses in WFC3/UVIS (FLASH$=10$ for F336W and $=11$ for F275W). WFC3/IR exposures were taken with 13 MULTIACCUM non-destructive read samples of the STEP100 sequence for a single F110W exposure, three F160W exposures with 9 samples of the STEP200 sequence, and one additional F160W exposure with 10 samples of the STEP100 sequence. The adopted dithers are designed to produce Nyquist sampled images in F475W, F814W, and F160W, but do not fill in the ACS chip gap. Instead, the ACS chip gaps are filled by overlapping exposures from observations in adjacent fields. The two WFC3/UVIS exposures for each filter are dithered to fill the chip gap, but have challenging cosmic ray rejection, due to having only 1-2 overlapping images. The ACS observations also include very short “guard” exposures in F475W (10 seconds) and F814W (15 seconds) to capture photometry for the brightest stars, which can be saturated in the longer individual exposures. A table of the exposures at each position is supplied in Table 1. The resulting map of exposure times in all cameras is shown in Figure 4.
Notable features are the slightly larger WFC3/UVIS fields of view, which lead to larger rectangular overlaps between adjacent fields than the minimally overlapping WFC3/IR fields, and the diagonal overlaps of the even larger ACS/WFC exposures. Some of the inconsistencies in the tiling pattern are due to adjustments that ensured coverage with the non-standard Brick 2, Field 5 rotation. The most highly overlapped regions in F475W and F814W have over 30,000 seconds of total exposure time. However, because the majority of observations in the optical and IR are crowding-limited, rather than photon-limited, the varying exposure times due to the overlapping pointings tend to affect the measured source density in less obvious ways that are often only noticeable at faint magnitudes.

### 2.2 Photometry

We measured point spread function (PSF) fitting photometry at the location of every star detected in our survey footprint on every exposure that covered the position of the star. We closely followed the process used for the PHAT survey photometry to simplify comparisons; however, some improvements have been made to the process based on lessons learned from PHAT. The first improvement was the use of charge transfer efficiency (CTE) corrected (flc-type) images for photometry. In PHAT, no correction was used for WFC3/UVIS photometry, and ACS/WFC photometry was corrected at the catalog level. This change was implemented to address systematic uncertainties that appeared to be related to CTE in Williams et al. (2014) at the faint end. In addition, we implemented spatially-varying TinyTim PSFs (Krist et al., 2011) for all cameras to address the systematic uncertainties that appeared to be related to the PSF in Williams et al. (2014). A high-level overview of the process of measuring the stellar photometry is as follows. The first step was the astrometric alignment of all 972 individual exposures with the Gaia catalog.
These images were then combined into mosaic images, which were used for identifying and flagging bad pixels and cosmic ray affected pixels for masking during photometry, as well as for public release images (https://hubblesite.org/image/4305/gallery). The aligned individual images were processed with the DOLPHOT software package (Dolphin, 2000, 2016) to measure PSF corrections and aperture corrections, which largely correct for variations in telescope focus. All overlapping individual exposures were stacked in memory to search for all statistically significant detections using the full survey depth. At each detected centroid, the appropriate PSF was fit to the detection’s location in each of the overlapping exposures, for all filters simultaneously. DOLPHOT then reported the measured fluxes and corresponding magnitudes in each image, as well as the combined flux and magnitude in each observed band. Finally, the raw photometry output was processed to flag possible artifacts and generate summary catalogs containing a subset of the many thousands of columns required to describe the complete measurement suite. We describe each of these steps in detail below.

#### 2.2.1 Astrometric Alignment & Mosaicking

We aligned all flc (ACS/WFC, WFC3/UVIS) and flt (WFC3/IR) images to the Gaia DR2 astrometric solution following the workflow presented by Bajaj (2017; https://github.com/spacetelescope/gaia_alignment). Using this workflow, a reference astrometric catalog was retrieved from the Gaia archive with astroquery (Ginsburg et al., 2017, 2019), which was then passed to the TweakReg function in the Drizzlepac package (STSCI Development Team, 2012; Hack et al., 2013; Avila et al., 2015), which finds centroids in each image, matches triangular patterns, and updates the image headers with the resulting aligned astrometric solution.
The catalogs from which the final alignment solution was derived typically contained several hundred stars per ACS/WFC pointing, and 50-200 stars per WFC3/UVIS or WFC3/IR pointing. The RMS dispersions of the alignment residuals in $X$ and $Y$ are shown for all frames in Figure 5. Typical overall residual dispersions are on the order of 3 mas for ACS/WFC and WFC3/UVIS, and 7 mas for WFC3/IR. We used the AstroDrizzle function of the Drizzlepac package (STSCI Development Team, 2012; Hack et al., 2013; Avila et al., 2015) to combine the images within each band into a distortion-corrected, high-resolution pixel array (0.035″/pixel in all bands, combined with a lanczos3 kernel). This higher resolution array allows the full camera resolution to be recovered from dithered images, which were Nyquist sampled in F475W, F814W, and F160W. A minmed filter flagged statistical outlier pixels on the input exposures for all filters except F110W, for which there is only a single exposure, forcing us to rely on up-the-ramp fitting to flag bad pixels and filter cosmic rays. These pixels were not considered when generating the combined image, and they can easily be masked in any further analysis using those exposures. The flagged images were then combined with astrodrizzle, weighted by exposure time, to produce deep mosaics that take advantage of sub-pixel dithering to improve spatial resolution. An example of the improvements in depth and resolution is shown in Figure 6. The final product from the F475W exposures, which is the deepest band with the most sub-pixel dithers, was then used as the reference image for all of the photometry measurements.

#### 2.2.2 Preparing Individual Exposures

After updating the data quality extensions of the individual exposures in the astrodrizzle step, we further prepared the individual exposures for photometry with DOLPHOT. This preparation starts with running the task acsmask or wfc3mask (depending on camera) on each exposure.
This task masks the flagged pixels in the DQ extensions of each CCD in each exposure and multiplies the image by the appropriate pixel area map to take into account the effects of distortion on the flux measured in each pixel. We also run this step on the full-depth F475W combined image, which serves as the reference image for the final photometry. DOLPHOT uses this image as the reference frame to which all of the individual exposures will be aligned in memory, and from which all of the final star positions will be reported. As such, it is beneficial to use the deepest and highest spatial resolution image for this purpose. We then ran the splitgroups task to produce separate files for each CCD of each exposure, and then we ran calcsky on each of these individual frames to generate maps of the sky level in each exposure. These sky files, which are simple smoothed versions of the original images, are used by DOLPHOT to find an initial list of statistically significant centroids to align each frame to the reference image; in spite of their name, they are not actually used for measuring the true sky level, which instead is measured in a much more sophisticated way described in Section 2.2.3. We then ran DOLPHOT on each individual exposure, to measure the central PSF and aperture corrections of each CCD read, followed by running DOLPHOT’s alignment on the full stack of CCD reads to determine and record the parameters that align each individual frame to the reference image.

#### 2.2.3 Running DOLPHOT on Full Image Stacks

With images, alignment parameters, PSF corrections, and aperture corrections for each individual exposure in hand, we could run full-stack photometry on any region of the survey. We ran these stacks using the DOLPHOT parameters updated from those of the PHAT survey to optimize the resulting catalogs for stellar populations science.
The main updates are the removal of catalog-level CTE corrections, because we used the on-image CTE corrections (flc images), and the use of TinyTim PSFs for all cameras and filters. Values of all of the adopted DOLPHOT parameters for our reductions are provided in Table 2. Memory and time limitations prevent us from simply putting the entire set of M33 exposures into DOLPHOT simultaneously. Instead, we subdivided the data into separate stacks to measure the photometry of different regions of the survey in parallel. We used the DOLPHOT parameters that allow the user to define the region within which photometry is performed, launching multiple photometry processes, each covering a different region of the survey and including all overlapping individual images. We made these regions sufficiently small that DOLPHOT could complete the PSF fitting photometry in a reasonable amount of clock time, typically about one week. We set up 54 separate processes, each covering $\sim$4 square arcminutes of the survey area, overlapping by 100 pixels on a side to avoid introducing edge effects. We then merged the resulting catalogs along the centers of the regions’ overlaps to produce one final catalog for the survey. We then checked for any edge effects from the survey division by plotting the densities of stars. Such a plot is shown in Figure 7.

#### 2.2.4 Flagging and Processing Photometry Output

DOLPHOT returns a comprehensive table of all of the measurements made on every PSF fit to every image, as well as the combined measurement of every source in every filter. These measurements include the flux, Vega system magnitude, count-based uncertainty, signal-to-noise ratio, and several measurements of how well the source was fitted by the PSF. These quality metrics include sharpness, roundness, $\chi$, and crowding. Full descriptions of these are included in the DOLPHOT documentation (http://americano.dolphinsim.com/dolphot/).
Briefly, the sharpness parameter measures how centrally peaked the source is compared to the PSF, or how much flux is concentrated in its central pixels relative to the outer ones. High values signify a source with high central concentration, such as a hot pixel or cosmic ray. Low values indicate that the source is not peaked enough, as expected for blended stars or background galaxies. The roundness parameter measures how circular the source is (zero is perfectly round), and $\chi$ provides an estimate of the overall goodness of fit to the PSF. The crowding parameter measures how much the source’s photometry is affected by neighboring sources. The larger the crowding value, the more densely packed the PSF radius is with other sources, and the more likely it is that the reported magnitude has systematic uncertainties due to subtraction of neighbors. For the PHAT survey, we determined values for the DOLPHOT parameters that tend to indicate good measurements of real stars (Williams et al., 2014). We have adopted these criteria for this catalog as well, and list them here for convenience. For a complete description of how they were determined, see Williams et al. (2014). They are different for each camera, as the pixel scale and PSF sampling were different, with the exception of the signal-to-noise ratio, for which we require $\mathrm{S/N}>4$ for all cameras. For ACS, the other parameters are: sharpness${}^{2}<0.2$ and crowding$\ <2.25$. For UVIS, they are: sharpness${}^{2}<0.15$ and crowding$\ <1.3$; for WFC3’s IR channel, they are sharpness${}^{2}<0.15$ and crowding$\ <2.25$. These culling parameters were found to have the best balance of removing a high fraction of sources outside of color-magnitude diagram (CMD) features while keeping a very high fraction of total measurements. Thus, they lean towards being inclusive to avoid over-culling the data, at the expense of allowing a larger fraction of less certain measurements and contaminants.
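The per-camera criteria above can be collected into a small helper; this is our own sketch (the function and table names are illustrative, not part of DOLPHOT):

```python
# Per-camera GST ("good star") criteria from the text; all cameras
# additionally require S/N > 4.  Values: (max sharpness^2, max crowding).
GST_CUTS = {
    "ACS":  (0.20, 2.25),
    "UVIS": (0.15, 1.30),
    "IR":   (0.15, 2.25),
}

def passes_gst(camera, snr, sharpness, crowding):
    """True if a single-band measurement meets the GST quality criteria."""
    sharp2_max, crowd_max = GST_CUTS[camera]
    return snr > 4 and sharpness ** 2 < sharp2_max and crowding < crowd_max

# A well-measured ACS star passes; a heavily crowded one does not.
assert passes_gst("ACS", snr=25.0, sharpness=0.1, crowding=0.5)
assert not passes_gst("ACS", snr=25.0, sharpness=0.1, crowding=3.0)
```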
We show the results of the above cuts on the CMD in Figure 8, where the stars that pass the metric make a CMD with well-defined, well-populated features, whereas the rejected stars form a relatively featureless cloud of points. However, there is always some risk of excluding important individual detections that did not produce high-quality PSF fits, such as bright stars in clusters. Thus, we include all measurements in our catalog, but we add a flag column for each band indicating whether it passes. This method allows the user to search the full catalog for a specific source, but also allows one to easily look at populations without being distracted by artifacts. For science cases that require a very clean sample, we recommend going to the full catalog and applying more conservative culling criteria than those adopted for our quality columns reported here. The CMDs in all bands for the stars that pass our GST quality checks are shown in Figures 9-13. These figures also show, in the upper panels, the fraction of accepted measurements over the same CMD space. In general, the highest impact of any metric on the culling of the data is the signal-to-noise ratio, which culls 100% of the measurements fainter than the detection limit in each band. However, in the IR, the quality metrics greatly reduce the amount of scatter in the CMD features at the faint end, as demonstrated by the low fraction of passing measurements up to 2 magnitudes brighter than the detection limit in F160W in the crowded central regions. This difference is mainly attributable to the lower spatial resolution in the IR, which increases the impact of crowding, producing more unreliable measurements that fall outside of the main features of the CMD. We also show the effects of the depth in each band on our recovery of different features in Figure 14. Here a representative subsample of stars is plotted on CMDs color-coded by the number of bands in which they were detected.
It is clear from this figure that the UV observations are our shallowest, as nearly every UV detection is also detected in all of the other bands, and no RGB stars are detected in the UV. On the other hand, nearly every star in the catalog is detected in the optical, and all but the faintest main-sequence stars are detected in the IR. It is important to keep these depth effects in mind when working with the catalogs to perform analysis on the populations present in M33.

### 2.3 Artificial Star Tests

We quantify the accuracy, precision, and completeness of our photometry through artificial star tests (ASTs), wherein artificial stars with known parameters are injected into the data and then recovered (if possible). ASTs place stars with realistic spectral energy distributions (SEDs) at a fixed sky position in each overlapping input image. We then put those images through the same photometry routine as the original data, and compare the output measurements for the star to the input values. If the star is not recovered by the photometry routine, that is also recorded. We repeat this process many thousands of times in many locations in the survey to characterize the quality of our photometry catalogs as a function of stellar density. We describe each step in detail below. We generated input artificial star magnitudes with MATCH (Dolphin, 2002) using the fake utility to produce a simulated 6-band photometric catalog sampled from the MIST model suite (Choi et al., 2016). We used two age bins, 1 Myr to 1 Gyr and 8 to 16 Gyr, and a metallicity range of $-2<\mathrm{[Fe/H]}<0.5$, which together span sufficient color space to be applicable to the majority of our photometry. We restricted the optical magnitudes to $15<\mathrm{F475W}<31$ and $17<\mathrm{F814W}<30$, but left the UV and IR magnitudes effectively unconstrained. To ensure sufficient sampling of bright stars, we used a top-heavy IMF. CMDs of the final AST inputs are shown in Figure 15.
We select four regions roughly along the major axis that span the full range of stellar densities, as shown in the right panel of Figure 7. For each region we create input lists of 50,000 artificial stars with random XY locations, for a total of 200,000 ASTs. We run the stars from the input AST lists through our photometry routine one at a time, such that the ASTs were not able to affect one another. DOLPHOT’s output in AST mode includes the location and flux of each input star, followed by all of the output that is reported for all of the unaltered data. Quality metrics that were used to flag measurements in the star catalog can then be applied to the AST catalog for consistency. We consider an artificial star to be “recovered” in a given band if it is within 2 reference frame pixels ($0.07\arcsec$) of the input source position, and fulfills the GST (“good star”) quality requirements for that band discussed in Section 2.2.4. Figure 16 and Table 3 provide the completeness as a function of magnitude as well as the magnitude $m_{50}$ at which 50% of inserted artificial stars are recovered (the “50% completeness limit”). For typical astronomical point sources, this completeness limit is largely set by the number of photons detected from the source. However, at high stellar densities, the completeness limit is set by the magnitude at which the surface density of sources (i.e., number of sources per square arcsec) is so high that they are always blended with brighter sources, rendering the original source undetectable. In this “crowding limited” (rather than “photon limited”) regime, the limiting magnitude is set more by stellar density than by photon counting statistics. In both M31 and M33, HST imaging is crowding limited in the optical and NIR over much of the disk, with the effects being most significant in the NIR, where the larger pixel scale and PSF size severely limit detection and reliable measurement of faint stars.
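The 50% completeness limit is, in essence, the magnitude at which the binned AST recovery fraction crosses 0.5; a minimal numpy sketch of that estimate, where the function name, binning, and toy data are ours:

```python
import numpy as np

def completeness_limit(mag_in, recovered, bins):
    """Magnitude at which the recovered fraction of ASTs drops to 50%.

    mag_in:    input magnitudes of all artificial stars
    recovered: boolean array, True if the star passed the quality cuts
               close enough to its input position
    bins:      magnitude bin edges
    """
    n_all, _ = np.histogram(mag_in, bins=bins)
    n_rec, _ = np.histogram(mag_in[recovered], bins=bins)
    frac = n_rec / np.maximum(n_all, 1)
    centers = 0.5 * (bins[:-1] + bins[1:])
    # completeness declines with magnitude, so reverse for np.interp
    return np.interp(0.5, frac[::-1], centers[::-1])

# toy example: completeness falls linearly from 1 at mag 24 to 0 at mag 28,
# so the 50% limit should come out near magnitude 26
rng = np.random.default_rng(0)
mags = rng.uniform(24, 28, 100_000)
rec = rng.uniform(0, 1, mags.size) < (28 - mags) / 4
m50 = completeness_limit(mags, rec, np.arange(24, 28.5, 0.5))
```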
In contrast, the PHAT and PHATTER observations in the UV bands are sufficiently shallow that they do not reach magnitudes where UV-detectable stars are so numerous that they begin to crowd together. For M33, the UV observations reach F275W$\sim$24.5 relatively independent of stellar density, as expected for photon-limited images, whereas for the optical, the depth changes by $\sim$1.3 magnitudes moving from the inner to outer disk, reflecting the role that stellar crowding plays in setting the detection limit. In addition, the variation in completeness with magnitude (Figure 17) is qualitatively different in the photon-limited and crowding-limited data. In the former, the completeness drops from near 100% to 0% over a narrow range in magnitude ($\lesssim$1), whereas in the crowding-limited data, the roll-off in completeness is much more gradual with magnitude ($\gtrsim$2), such that stars begin to be “hidden” by crowding several magnitudes before the magnitude at which they disappear from the catalog. The slow roll-off in completeness is the result (in part) of the increasing odds that a star will fail the quality cuts as the likelihood of it blending with a star of comparable flux increases. As with completeness, photometric uncertainties reflect impacts from both photon-counting uncertainties and crowding. There are multiple contributors to photometric uncertainty and bias in crowded-field photometry beyond the well-known impact of photon-counting statistics for the source and sky. These effects include uncertainties and biases from deblending of neighbors and sky estimation, as well as brightward biases from blending with undetected sources (which also increases the chance of detection). These effects are captured well by artificial star tests, though other systematic effects due to CTE or imperfect PSF models will remain.
These various drivers of uncertainty and bias (crowding, exposure time, and background) all vary among filters and cameras, and thus will have different behavior in each. We summarize the AST results for uncertainties and bias in Figure 18 and Table 4. Figure 18 shows the median difference in magnitude between the recovered and input magnitudes (recovered - input) as a function of input magnitude, along with the 16th and 84th percentile ranges of the distribution of differences, shown as solid and transparent lines, respectively, plotted for a range of mean local densities (different color lines, with darker, thinner lines indicating higher stellar densities). Positive values indicate sources that are recovered at fainter magnitudes than their true magnitudes. Table 4 compiles numerical measurements of the bias and uncertainty for different filters and source densities. The uncertainty is also reported in units of the DOLPHOT-reported photometric uncertainty, which is based entirely on photon-counting uncertainties. The measured scatter between the true and recovered magnitudes is typically $\sim$20% larger than the photon-counting uncertainty in the NUV, a factor of $\sim$4 larger in the optical, and a factor of $\sim$5 larger in the NIR. Figure 18 shows that, as expected, both the bias and the measurement uncertainty increase towards fainter magnitudes, where photon-counting statistics and crowding are worse. The biases are much smaller than the photometric uncertainties (typically by a factor of 2-4) at all but the very faintest limits, where very few sources would be recovered at all. At a fixed magnitude in the optical or NIR bands, the biases and uncertainties are larger in regions with higher source densities, due to the higher crowding.
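The bias and uncertainty statistics described above follow directly from the distribution of (recovered - input) magnitudes; a sketch of that calculation on toy data (the function name and toy numbers are ours):

```python
import numpy as np

def ast_bias_and_scatter(mag_in, mag_out):
    """Median offset and half the 16th-84th percentile range of
    recovered - input magnitudes.  Positive bias = recovered fainter."""
    dm = np.asarray(mag_out) - np.asarray(mag_in)
    p16, p84 = np.percentile(dm, [16, 84])
    return np.median(dm), 0.5 * (p84 - p16)

# toy ASTs: recovered 0.02 mag too faint, with 0.05 mag Gaussian scatter
rng = np.random.default_rng(1)
m_in = rng.uniform(20, 26, 50_000)
m_out = m_in + 0.02 + rng.normal(0.0, 0.05, m_in.size)
bias, sigma = ast_bias_and_scatter(m_in, m_out)
```

For a Gaussian error distribution the 16-84 percentile half-width recovers the standard deviation, which is why it serves as the AST-based uncertainty.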
In the optical and NIR bands, as sources become intrinsically fainter, their measured fluxes tend to be biased towards brighter magnitudes, due to unresolved, overlapping sources boosting the inserted artificial star above the detection limit. These effects are somewhat more pronounced in the NIR, most likely due to the camera’s larger pixels and longer wavelengths producing lower resolution images (see Figure 6) and thus larger impacts due to crowding. No corrections for these biases have been made to the catalog. In the UV, the trend of increasing bias and uncertainty for fainter sources is similar to what is seen in the optical and the NIR. However, the variations with UV magnitude are largely independent of local source density, reflecting the lack of significant crowding except at the very highest density in F336W. Another notable difference is in the sign of the bias. Well before completeness begins to decline significantly, the bias becomes substantial, but with the opposite sign to that seen in the optical and NIR, such that measurements appear to be biased significantly faint. The effect appears most consistent with a slightly high background measurement, since the bias induced by over-subtracting the sky would be very small for bright sources and increase for fainter sources, as we see in the NUV photometry. A similar trend was seen in the Williams et al. (2014) PHAT photometry study, and the speculation was that perhaps charge transfer efficiency (CTE) effects were causing the sky brightness to be overestimated. However, in this work we have used pre-flashed, CTE-corrected UVIS images, which should have reduced CTE effects on the sky brightness. Nonetheless, it is clear that our technique is likely attributing too much flux to the sky in the NUV images.

## 3 Results

Tables 5 and 6 provide samples of the photometry catalog and AST results from the survey.
The catalog included here contains the positions, magnitudes, signal-to-noise ratio, and data quality flag for each detected star. The comprehensive, and much larger, catalog is available as a high-level science product (HLSP) in the Multimission archive via DOI 10.17909/t9-ksyp-na40. This comprehensive catalog includes the combined measurements of each star in each band, as well as each of the individual measurements in all of the survey exposures, along with all of the measurement quality information reported by DOLPHOT (uncertainty, $\chi$, sharp, round, crowd, error flag). This catalog includes thousands of columns, and is hundreds of GB in size. The simplified AST results (with limited columns) in Table 6 are the location, input magnitude, output magnitude, output signal-to-noise, and output quality flag for each artificial star. The full catalog with all of the columns includes the input counts for each individual exposure, as well as all of the output photometry measurement columns as for the detected stars in the survey. As such, the HLSP catalog again is much larger, and contains thousands of columns for those who would make use of the full AST input and output.

### 3.1 Color-magnitude Diagrams

In Figure 8 we plot the entire catalog of detections in the optical bands, and we label the strongest features. The left panel shows all of the measurements, and the right panel shows the measurements that do not pass our quality metrics. This overview CMD shows the high fidelity of the photometry, which produces well-populated and clearly defined features. The high definition of these features, described below, suggests that a very large fraction of our photometry is reliable. On the blue edge, the vertical plume of the upper main sequence (MS) is narrow and confined to a sharp edge determined by the saturation of color when the effective temperature of stars reaches hotter than $\sim$10${}^{4}$ K.
Slightly to the red of this is a second, less populated blue plume: the blue helium-burning (BHeB) sequence. This sequence marks the bluest extremity of the loop that characterizes the core-helium-burning phase of intermediate- and high-mass stars. Its continuous appearance suggests that M33 has been forming stars at a relatively high intensity for hundreds of Myr. The next bright plume to the red (brighter than F814W$\sim$20 and starting at F475W–F814W$\sim$2) is the red helium-burning (RHeB) sequence. This feature consists of massive stars in the initial stage of core helium burning, with convective envelopes, before the decrease in the central helium content drives their move towards the blue BHeB. It also contains the stars in the very latest phases of core helium burning, which move to the red again as their He-exhausted cores contract and extended convection develops in their envelopes. In theory, this RHeB sequence extends down in the CMD until it merges with the red clump (RC) of low-mass core-helium-burning stars at F814W$\sim$25. The width of the color gap between the RHeB and the BHeB is sensitive to the metallicity, as more metal-rich stars will be redder during this phase.

### 3.2 Luminosity Functions

While these initial qualitative evaluations of our photometry are promising, we now move to quantitative tests of the fidelity and consistency of standard CMD features to further assess the robustness and homogeneity of the catalog. Two features that are very well suited to such quality checks are the tip of the RGB (TRGB) and the RC. By comparing the locations of these features in the luminosity function as a function of position in the survey, we can ensure that any variations are smooth and thus most likely related to gradients in the stellar population demographics (e.g., age and metallicity). The left panel of Figure 24 shows the optical color-magnitude selection regions for the TRGB and the RC on a CMD of the entire survey.
The upper right panel shows the F814W luminosity function, normalized by the total number of stars sampled, for stars in the color range $2.5<$F475W–F814W$<3.5$ (evaluated at F814W = 20.7) and the magnitude range 20.3$<$F814W$<$21.3 at several locations in the survey; this selection should be dominated by the metal-poor RGB, which has a TRGB absolute magnitude of F814W$\sim{-}4.05\pm 0.1$ (Beaton et al., 2018). We see that the function steepens at the TRGB, and that the TRGB at this color remains consistent to within $\sim$0.1 mag over the survey, showing that the amount of systematic uncertainty over large areas in our catalog is small. Furthermore, this TRGB magnitude is within the uncertainties of that expected for a foreground $A_{\rm{F814W}}$ of 0.063 (Schlafly & Finkbeiner, 2011) and a distance modulus of $24.67\pm 0.07$ (de Grijs et al., 2017), suggesting that our absolute photometric calibration is also accurate. In the lower right panel of Figure 24, we show the F814W luminosity function for stars in the color range 1$<$F475W–F814W$<$2 (evaluated at F814W = 25) and the magnitude range 23.5$<$F814W$<$25.5 at several locations to check the position of the RC. This can be compared to $M_{\rm{RC}}^{I}=-0.22\pm 0.03$ from Groenewegen (2008), which converts to an apparent magnitude $m_{\rm{RC}}^{I}=24.51$ at the distance and extinction of M33. The magnitude of the peak of the RC remains consistent to within 0.1 mag (24.51$\pm$0.10), confirming that even at much fainter fluxes, the systematic uncertainties over large areas are small.

### 3.3 Foreground and Background Contamination

Our catalog has a small amount of contamination from Milky Way foreground stars and from background galaxies. To estimate the severity of the foreground contamination, we produced model Galactic populations using the Trilegal software package (Girardi et al., 2005). The model suggests $\sim$3400 foreground stars in our survey footprint with F160W$<$26, with $\sim$2200 of these having F475W$<$28. Thus, our catalog of 22 million stars contains $<$0.02% foreground contamination.
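The TRGB and red-clump consistency checks above, and the contamination fraction, reduce to distance-modulus arithmetic ($m = M + \mu + A$) and simple counting. Using only the values quoted in the text:

```python
# Distance modulus and F814W foreground extinction quoted above
mu = 24.67        # de Grijs et al. (2017)
a_f814w = 0.063   # Schlafly & Finkbeiner (2011)

# Expected apparent magnitudes: m = M + mu + A
m_trgb = -4.05 + mu + a_f814w  # TRGB, M from Beaton et al. (2018)
m_rc = -0.22 + mu + a_f814w    # red clump, M from Groenewegen (2008)
print(round(m_trgb, 2), round(m_rc, 2))  # 20.68 24.51

# Foreground contamination fraction from the Trilegal counts
frac = 3400 / 22e6
print(f"{100 * frac:.3f}%")  # 0.015%, below the quoted <0.02% limit
```

Both numbers match the measured values in the text (the TRGB selection centered at F814W = 20.7 and the RC peak at 24.51), which is the basis for the statement that the absolute calibration is accurate.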
However, in certain areas of color-magnitude space, it is important to be able to identify foreground features so that they are not confused with M33 populations. To aid in this recognition, Figure 25 provides CMDs of the foreground model on the same axes as our survey CMDs. These plots show the locations of features associated with the foreground populations. The foreground mostly occupies the space between the BHeB and the RHeB, while slightly contaminating the RGB and AGB. The only highly visible foreground feature is the narrow bright plume of stars at $\mathrm{F110W}-\mathrm{F160W}=0.7$, which is the shared color of virtually all of the foreground main sequence stars in the IR. Interestingly, M33 has a well-populated RHeB feature that is vertical at $\mathrm{F110W}-\mathrm{F160W}=0.9$. We have verified that the majority of stars in this feature are the same as those in the bright feature at $\mathrm{F475W}-\mathrm{F160W}\sim 5$, which has no significant foreground equivalent. Thus, not only are the RHeB stars separated from the foreground in IR color, but the foreground contamination in our catalog appears to be less than expected.

## 4 Conclusions

We have produced a catalog of resolved stellar photometry for 22 million stars in the field of M33 from 54 HST pointings covering the inner 3.1$\times$4.6 kpc in 6 bands: F275W, F336W, F475W, F814W, F110W, and F160W. The astrometry of this catalog is aligned to the Gaia DR2 astrometric solution to $\sim$5 milliarcsec. This catalog reaches $m_{\mathrm{F275W}}=24.5$, $m_{\mathrm{F336W}}=25$, $m_{\mathrm{F475W}}=28.5$, $m_{\mathrm{F814W}}=27.5$, $m_{\mathrm{F110W}}=26$, and $m_{\mathrm{F160W}}=25$ at a signal-to-noise limit of 4. Crowding causes the limiting magnitude to be brighter in the redder bands closer to the center of M33.
This photometry will be studied in great detail by many future investigations, such as the history of star formation in M33, the M33 star cluster population, the initial mass function of star clusters in M33, feedback between the stars and interstellar medium in M33, and the dust content of M33, among many others. We have performed many quality checks of the photometry, including ensuring that the tip of the red giant branch is consistent with previous distance measurements of M33, as well as running suites of artificial star tests, in which stars of known SEDs are inserted into the data and the analysis routine rerun to assess the precision and completeness with which stars are recovered. A simplified version of our results catalogs is provided here, which will likely provide all of the information required for many science use cases; however, the exhaustive and complete output from our photometry measurements is available from the multimission archive as an HLSP. The code used to generate the tables and figures in this paper (with the exception of Figures 1 and 2) is available at https://github.com/meredith-durbin/m33_survey_plots.

Support for this work was provided by NASA through grant #GO-14610 from the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS 5-26555. This research has made use of “Aladin sky atlas” developed at CDS, Strasbourg Observatory, France, and of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.

## References

* Anderson & Bedin (2010) Anderson, J., & Bedin, L. R. 2010, PASP, 122, 1035, doi: 10.1086/656399 * Anderson & Ryon (2018) Anderson, J., & Ryon, J. E. 2018, Improving the Pixel-Based CTE-correction Model for ACS/WFC, Instrument Science Report ACS 2018-04 * Astropy Collaboration et al. (2013) Astropy Collaboration, Robitaille, T. P., Tollerud, E. J., et al.
2013, Astronomy & Astrophysics, 558, A33, doi: 10.1051/0004-6361/201322068 * Astropy Collaboration et al. (2018) Astropy Collaboration, Price-Whelan, A. M., Sipőcz, B. M., et al. 2018, The Astronomical Journal, 156, 123, doi: 10.3847/1538-3881/aabc4f * Avila et al. (2015) Avila, R. J., Hack, W., Cara, M., et al. 2015, in Astronomical Society of the Pacific Conference Series, Vol. 495, Astronomical Data Analysis Software and Systems XXIV (ADASS XXIV), ed. A. R. Taylor & E. Rosolowsky (Astronomical Society of the Pacific), 281. https://arxiv.org/abs/1411.5605 * Bajaj (2017) Bajaj, V. 2017, Aligning HST Images to Gaia: A Faster Mosaicking Workflow, Space Telescope WFC3 Instrument Science Report * Barker et al. (2007a) Barker, M. K., Sarajedini, A., Geisler, D., Harding, P., & Schommer, R. 2007a, AJ, 133, 1138, doi: 10.1086/511186 * Barker et al. (2007b) —. 2007b, AJ, 133, 1125, doi: 10.1086/511185 * Beasley et al. (2015) Beasley, M. A., San Roman, I., Gallart, C., Sarajedini, A., & Aparicio, A. 2015, MNRAS, 451, 3400, doi: 10.1093/mnras/stv943 * Beaton et al. (2018) Beaton, R. L., Bono, G., Braga, V. F., et al. 2018, Space Sci. Rev., 214, 113, doi: 10.1007/s11214-018-0542-1 * Block et al. (2007) Block, D. L., Combes, F., Puerari, I., et al. 2007, A&A, 471, 467, doi: 10.1051/0004-6361:20065908 * Breddels & Veljanoski (2018a) Breddels, M. A., & Veljanoski, J. 2018a, VaeX: Visualization and eXploration of Out-of-Core DataFrames, 3.0.0. http://ascl.net/1810.004 * Breddels & Veljanoski (2018b) —. 2018b, Astronomy & Astrophysics, 618, A13, doi: 10.1051/0004-6361/201732493 * Bresolin et al. (2010) Bresolin, F., Stasińska, G., Vílchez, J. M., Simon, J. D., & Rosolowsky, E. 2010, MNRAS, 404, 1679, doi: 10.1111/j.1365-2966.2010.16409.x * Chandar et al. (1999) Chandar, R., Bianchi, L., Ford, H. C., & Salasnich, B. 1999, PASP, 111, 794, doi: 10.1086/316393 * Choi et al. (2016) Choi, J., Dotter, A., Conroy, C., et al.
2016, ApJ, 823, 102, doi: 10.3847/0004-637X/823/2/102 * Choudhury et al. (2015) Choudhury, S., Subramaniam, A., & Cole, A. A. 2015, Monthly Notices of the Royal Astronomical Society, 455, 1855, doi: 10.1093/mnras/stv2414 * Choudhury et al. (2018) Choudhury, S., Subramaniam, A., Cole, A. A., & Sohn, Y.-J. 2018, Monthly Notices of the Royal Astronomical Society, 475, 4279, doi: 10.1093/mnras/sty087 * Cioni et al. (2008) Cioni, M. R. L., Irwin, M., Ferguson, A. M. N., et al. 2008, A&A, 487, 131, doi: 10.1051/0004-6361:200809366 * Corbelli & Walterbos (2007) Corbelli, E., & Walterbos, R. A. M. 2007, ApJ, 669, 315, doi: 10.1086/521618 * Dalcanton et al. (2012) Dalcanton, J. J., Williams, B. F., Lang, D., et al. 2012, ApJS, 200, 18, doi: 10.1088/0067-0049/200/2/18 * Dalcanton et al. (2015) Dalcanton, J. J., Fouesneau, M., Hogg, D. W., et al. 2015, ApJ, 814, 3, doi: 10.1088/0004-637X/814/1/3 * Dask Development Team (2016) Dask Development Team. 2016, Dask: Library for Dynamic Task Scheduling, 1.0.0. https://dask.org * Davidge (2003) Davidge, T. J. 2003, AJ, 125, 3046, doi: 10.1086/375303 * de Grijs et al. (2017) de Grijs, R., Courbin, F., Martínez-Vázquez, C. E., et al. 2017, Space Sci. Rev., 212, 1743, doi: 10.1007/s11214-017-0395-z * De Paolis et al. (2016) De Paolis, F., Gurzadyan, V. G., Nucita, A. A., et al. 2016, A&A, 593, A57, doi: 10.1051/0004-6361/201628780 * Deul & van der Hulst (1987) Deul, E. R., & van der Hulst, J. M. 1987, A&AS, 67, 509 * Dolphin (2016) Dolphin, A. 2016, DOLPHOT: Stellar Photometry, 2.0. http://ascl.net/1608.013 * Dolphin (2000) Dolphin, A. E. 2000, PASP, 112, 1383, doi: 10.1086/316630 * Dolphin (2002) Dolphin, A. E. 2002, MNRAS, 332, 91, doi: 10.1046/j.1365-8711.2002.05271.x * Druard et al. (2014) Druard, C., Braine, J., Schuster, K. F., et al.
2014, A&A, 567, A118, doi: 10.1051/0004-6361/201423682 * D’Souza & Bell (2018) D’Souza, R., & Bell, E. F. 2018, Nature Astronomy, 2, 737, doi: 10.1038/s41550-018-0533-x * Engargiola et al. (2003) Engargiola, G., Plambeck, R. L., Rosolowsky, E., & Blitz, L. 2003, ApJS, 149, 343, doi: 10.1086/379165 * Gaia Collaboration et al. (2018) Gaia Collaboration, Brown, A. G. A., Vallenari, A., et al. 2018, A&A, 616, A1, doi: 10.1051/0004-6361/201833051 * Gallart (1998) Gallart, C. 1998, ApJ, 495, L43, doi: 10.1086/311218 * Ginsburg et al. (2017) Ginsburg, A., Parikh, M., Woillez, J., et al. 2017, Astroquery: Access to Online Data Resources, 0.3.0. http://ascl.net/1708.004 * Ginsburg et al. (2019) Ginsburg, A., Sipőcz, B. M., Brasseur, C. E., et al. 2019, The Astronomical Journal, 157, 98, doi: 10.3847/1538-3881/aafc33 * Girardi et al. (2005) Girardi, L., Groenewegen, M. A. T., Hatziminaoglou, E., & da Costa, L. 2005, A&A, 436, 895, doi: 10.1051/0004-6361:20042352 * Gordon et al. (2016) Gordon, M. S., Humphreys, R. M., & Jones, T. J. 2016, ApJ, 825, 50, doi: 10.3847/0004-637X/825/1/50 * Gratier et al. (2010) Gratier, P., Braine, J., Rodriguez-Fernandez, N. J., et al. 2010, A&A, 522, A3, doi: 10.1051/0004-6361/201014441 * Gregersen et al. (2015) Gregersen, D., Seth, A. C., Williams, B. F., et al. 2015, AJ, 150, 189, doi: 10.1088/0004-6256/150/6/189 * Groenewegen (2008) Groenewegen, M. A. T. 2008, A&A, 488, 935, doi: 10.1051/0004-6361:200810201 * Hack et al. (2013) Hack, W. J., Dencheva, N., & Fruchter, A. S. 2013, in Astronomical Society of the Pacific Conference Series, Vol. 475, Astronomical Data Analysis Software and Systems XXII, ed. D. N. Friedel (Astronomical Society of the Pacific), 49 * Hammer et al. (2018) Hammer, F., Yang, Y. B., Wang, J. L., et al. 2018, MNRAS, 475, 2754, doi: 10.1093/mnras/stx3343 * Harris et al. (2020) Harris, C. R., Millman, K. J., van der Walt, S. J., et al. 2020, Nature, 585, 357, doi: 10.1038/s41586-020-2649-2 * Hermelo et al. 
(2016) Hermelo, I., Relaño, M., Lisenfeld, U., et al. 2016, A&A, 590, A56, doi: 10.1051/0004-6361/201525816 * Heyer et al. (2004) Heyer, M. H., Corbelli, E., Schneider, S. E., & Young, J. S. 2004, ApJ, 602, 723, doi: 10.1086/381196 * Hinz et al. (2004) Hinz, J. L., Rieke, G. H., Gordon, K. D., et al. 2004, ApJS, 154, 259, doi: 10.1086/422558 * Hoopes & Walterbos (2000) Hoopes, C. G., & Walterbos, R. A. M. 2000, ApJ, 541, 597, doi: 10.1086/309487 * Humphreys et al. (2017) Humphreys, R. M., Gordon, M. S., Martin, J. C., Weis, K., & Hahn, D. 2017, ApJ, 836, 64, doi: 10.3847/1538-4357/aa582e * Humphreys & Sandage (1980) Humphreys, R. M., & Sandage, A. 1980, ApJS, 44, 319, doi: 10.1086/190696 * Hunter (2007) Hunter, J. D. 2007, Computing in Science & Engineering, 9, 90, doi: 10.1109/MCSE.2007.55 * Johnson (2019) Johnson, L. C. 2019, in American Astronomical Society Meeting Abstracts, Vol. 233, American Astronomical Society Meeting Abstracts #233, 249.11 * Johnson et al. (2015) Johnson, L. C., Seth, A. C., Dalcanton, J. J., et al. 2015, ApJ, 802, 127, doi: 10.1088/0004-637X/802/2/127 * Jones et al. (2001) Jones, E., Oliphant, T., Peterson, P., et al. 2001, SciPy: Open Source Scientific Tools for Python. http://www.scipy.org/ * Kobulnicky & Fryer (2007) Kobulnicky, H. A., & Fryer, C. L. 2007, ApJ, 670, 747, doi: 10.1086/522073 * Koch et al. (2018) Koch, E. W., Rosolowsky, E. W., Lockman, F. J., et al. 2018, MNRAS, 479, 2505, doi: 10.1093/mnras/sty1674 * Kormendy & McClure (1993) Kormendy, J., & McClure, R. D. 1993, AJ, 105, 1793, doi: 10.1086/116555 * Kramer et al. (2010) Kramer, C., Buchbender, C., Xilouris, E. M., et al. 2010, A&A, 518, L67, doi: 10.1051/0004-6361/201014613 * Krist et al. (2011) Krist, J. E., Hook, R. N., & Stoehr, F. 2011, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 8127, Proc. SPIE, 81270J, doi: 10.1117/12.892762 * Kruijssen et al. (2019) Kruijssen, J. M. D., Pfeffer, J. L., Reina-Campos, M., Crain, R. 
A., & Bastian, N. 2019, MNRAS, 486, 3180, doi: 10.1093/mnras/sty1609 * Kwitter & Aller (1981) Kwitter, K. B., & Aller, L. H. 1981, MNRAS, 195, 939, doi: 10.1093/mnras/195.4.939 * Lewis et al. (2015) Lewis, A. R., Dolphin, A. E., Dalcanton, J. J., et al. 2015, ApJ, 805, 183, doi: 10.1088/0004-637X/805/2/183 * Lewis et al. (2017) Lewis, A. R., Simones, J. E., Johnson, B. D., et al. 2017, ApJ, 834, 70, doi: 10.3847/1538-4357/834/1/70 * Lin et al. (2017) Lin, Z., Hu, N., Kong, X., et al. 2017, ApJ, 842, 97, doi: 10.3847/1538-4357/aa6f14 * Madore et al. (1974) Madore, B. F., van den Bergh, S., & Rogstad, D. H. 1974, ApJ, 191, 317, doi: 10.1086/152970 * Magrini et al. (2007) Magrini, L., Corbelli, E., & Galli, D. 2007, A&A, 470, 843, doi: 10.1051/0004-6361:20077215 * Magrini et al. (2010) Magrini, L., Stanghellini, L., Corbelli, E., Galli, D., & Villaver, E. 2010, A&A, 512, A63, doi: 10.1051/0004-6361/200913564 * Magrini et al. (2009) Magrini, L., Stanghellini, L., & Villaver, E. 2009, ApJ, 696, 729, doi: 10.1088/0004-637X/696/1/729 * Massey et al. (1996) Massey, P., Bianchi, L., Hutchings, J. B., & Stecher, T. P. 1996, ApJ, 469, 629, doi: 10.1086/177811 * Massey et al. (2006) Massey, P., Olsen, K. A. G., Hodge, P. W., et al. 2006, AJ, 131, 2478, doi: 10.1086/503256 * McConnachie et al. (2010) McConnachie, A. W., Ferguson, A. M. N., Irwin, M. J., et al. 2010, ApJ, 723, 1038, doi: 10.1088/0004-637X/723/2/1038 * McConnachie et al. (2018) McConnachie, A. W., Ibata, R., Martin, N., et al. 2018, ApJ, 868, 55, doi: 10.3847/1538-4357/aae8e7 * McKinney (2010) McKinney, W. 2010, in Proceedings of the 9th Python in Science Conference, ed. S. van der Walt & Jarrod Millman, 51–56 * McKinney (2011) McKinney, W. 2011, Python for High Performance and Scientific Computing, 14 * McLean & Liu (1996) McLean, I. S., & Liu, T. 1996, ApJ, 456, 499, doi: 10.1086/176674 * McMonigal et al. (2016) McMonigal, B., Lewis, G. F., Brewer, B. J., et al. 
2016, MNRAS, 461, 4374, doi: 10.1093/mnras/stw1657 * McQuinn et al. (2007) McQuinn, K. B. W., Woodward, C. E., Willner, S. P., et al. 2007, ApJ, 664, 850, doi: 10.1086/519068 * Mighell & Rich (1995) Mighell, K. J., & Rich, R. M. 1995, AJ, 110, 1649, doi: 10.1086/117638 * Minniti et al. (1993) Minniti, D., Olszewski, E. W., & Rieke, M. 1993, ApJ, 410, L79, doi: 10.1086/186884 * Mookerjea et al. (2016) Mookerjea, B., Israel, F., Kramer, C., et al. 2016, A&A, 586, A37, doi: 10.1051/0004-6361/201527366 * Mostoghiu et al. (2018) Mostoghiu, R., Di Cintio, A., Knebe, A., et al. 2018, MNRAS, 480, 4455, doi: 10.1093/mnras/sty2161 * Niu et al. (2020) Niu, H., Wang, J., & Fu, J. 2020, ApJ, 903, 93, doi: 10.3847/1538-4357/abb8d6 * Pedregosa et al. (2011) Pedregosa, F., Varoquaux, G., Gramfort, A., et al. 2011, Journal of Machine Learning Research, 12, 2825. http://dl.acm.org/citation.cfm?id=1953048.2078195 * Regan & Vogel (1994) Regan, M. W., & Vogel, S. N. 1994, ApJ, 434, 536, doi: 10.1086/174755 * Roberts (1899) Roberts, I. 1899, A Selection of Photographs of Stars, Star-Clusters and Nebulae, together with Records of Results obtained in the pursuit of Celestial Photography (Volume 2) (Cambridge University Press) * Robin et al. (2007) Robin, A. C., Rich, R. M., Aussel, H., et al. 2007, The Astrophysical Journal Supplement Series, 172, 545, doi: 10.1086/516600 * Rocklin (2015) Rocklin, M. 2015, in Proceedings of the 14th Python in Science Conference, ed. K. Huff & J. Bergstra, Austin, TX, 126–132, doi: 10.25080/Majora-7b98e3ed-013 * Rosolowsky et al. (2003) Rosolowsky, E., Engargiola, G., Plambeck, R., & Blitz, L. 2003, ApJ, 599, 258, doi: 10.1086/379166 * Rosolowsky et al. (2007) Rosolowsky, E., Keto, E., Matsushita, S., & Willner, S. P. 2007, ApJ, 661, 830, doi: 10.1086/516621 * Rosolowsky & Simon (2008) Rosolowsky, E., & Simon, J. D. 2008, ApJ, 675, 1213, doi: 10.1086/527407 * Sarajedini et al. (2000) Sarajedini, A., Geisler, D., Schommer, R., & Harding, P. 
2000, AJ, 120, 2437, doi: 10.1086/316807 * Schlafly & Finkbeiner (2011) Schlafly, E. F., & Finkbeiner, D. P. 2011, ApJ, 737, 103, doi: 10.1088/0004-637X/737/2/103 * Smith et al. (2020) Smith, N., Andrews, J. E., Moe, M., et al. 2020, Monthly Notices of the Royal Astronomical Society, 492, 5897–5915, doi: 10.1093/mnras/staa061 * Stephens & Frogel (2002) Stephens, A. W., & Frogel, J. A. 2002, AJ, 124, 2023, doi: 10.1086/342538 * STSCI Development Team (2012) STSCI Development Team. 2012, DrizzlePac: HST Image Software, 2.2.6. http://ascl.net/1212.011 * Telford et al. (2020) Telford, O. G., Dalcanton, J. J., Williams, B. F., et al. 2020, ApJ, 891, 32, doi: 10.3847/1538-4357/ab701c * Telford et al. (2019) Telford, O. G., Werk, J. K., Dalcanton, J. J., & Williams, B. F. 2019, ApJ, 877, 120, doi: 10.3847/1538-4357/ab1b3f * Thilker et al. (2005) Thilker, D. A., Hoopes, C. G., Bianchi, L., et al. 2005, ApJ, 619, L67, doi: 10.1086/424816 * Tibbs et al. (2018) Tibbs, C. T., Israel, F. P., Laureijs, R. J., et al. 2018, MNRAS, 477, 4968, doi: 10.1093/mnras/sty824 * Toribio San Cipriano et al. (2016) Toribio San Cipriano, L., García-Rojas, J., Esteban, C., Bresolin, F., & Peimbert, M. 2016, MNRAS, 458, 1866, doi: 10.1093/mnras/stw397 * Tüllmann et al. (2011) Tüllmann, R., Gaetz, T. J., Plucinsky, P. P., et al. 2011, ApJS, 193, 31, doi: 10.1088/0067-0049/193/2/31 * van der Kruit & Freeman (2011) van der Kruit, P. C., & Freeman, K. C. 2011, ARA&A, 49, 301, doi: 10.1146/annurev-astro-083109-153241 * van der Marel et al. (2019) van der Marel, R. P., Fardal, M. A., Sohn, S. T., et al. 2019, ApJ, 872, 24, doi: 10.3847/1538-4357/ab001b * van der Walt et al. (2011) van der Walt, S., Colbert, S. C., & Varoquaux, G. 2011, Computing in Science & Engineering, 13, 22, doi: 10.1109/MCSE.2011.37 * Verley et al. (2009) Verley, S., Corbelli, E., Giovanardi, C., & Hunt, L. K. 2009, A&A, 493, 453, doi: 10.1051/0004-6361:200810566 * Wainer et al.
(2020) Wainer, T., Johnson, L., Torres-Villanueva, E., & Seth, A. 2020, in American Astronomical Society Meeting Abstracts, Vol. 235, American Astronomical Society Meeting Abstracts #235, 306.02 * Waskom et al. (2018) Waskom, M., Botvinnik, O., O’Kane, D., et al. 2018, Mwaskom/Seaborn: V0.9.0 (July 2018), Zenodo, doi: 10.5281/ZENODO.1313201 * Watkins et al. (2010) Watkins, L. L., Evans, N. W., & An, J. H. 2010, MNRAS, 406, 264, doi: 10.1111/j.1365-2966.2010.16708.x * Weisz et al. (2015) Weisz, D. R., Johnson, L. C., Foreman-Mackey, D., et al. 2015, ApJ, 806, 198, doi: 10.1088/0004-637X/806/2/198 * West et al. (2018) West, L. A., Lehmer, B. D., Wik, D., et al. 2018, ApJ, 869, 111, doi: 10.3847/1538-4357/aaec6b * White et al. (2019) White, R. L., Long, K. S., Becker, R. H., et al. 2019, ApJS, 241, 37, doi: 10.3847/1538-4365/ab0e89 * Williams et al. (2009) Williams, B. F., Dalcanton, J. J., Dolphin, A. E., Holtzman, J., & Sarajedini, A. 2009, ApJ, 695, L15, doi: 10.1088/0004-637X/695/1/L15 * Williams et al. (2014) Williams, B. F., Lang, D., Dalcanton, J. J., et al. 2014, ApJS, 215, 9, doi: 10.1088/0067-0049/215/1/9 * Williams et al. (2015) Williams, B. F., Wold, B., Haberl, F., et al. 2015, ApJS, 218, 9, doi: 10.1088/0067-0049/218/1/9 * Williams et al. (2017) Williams, B. F., Dolphin, A. E., Dalcanton, J. J., et al. 2017, ApJ, 846, 145, doi: 10.3847/1538-4357/aa862a * Wyse (2002) Wyse, R. F. G. 2002, in EAS Publications Series, Vol. 2, EAS Publications Series, ed. O. Bienayme & C. Turon, 295–304. https://arxiv.org/abs/astro-ph/0204190 * Xi et al. (2020) Xi, S.-Q., Zhang, H.-M., Liu, R.-Y., & Wang, X.-Y. 2020, arXiv e-prints, arXiv:2003.07830. https://arxiv.org/abs/2003.07830 * Xilouris et al. (2012) Xilouris, E. M., Tabatabaei, F. S., Boquien, M., et al. 2012, A&A, 543, A74, doi: 10.1051/0004-6361/201219291 Table 1: Sample exposure data for one field. A full machine-readable table is provided in the online journal. Target Name | R.A. (J2000) | Decl. 
(J2000) | Start Time | Exp. (s) | Inst. | Aperture | Filter | Orientation
---|---|---|---|---|---|---|---|---
M33-B01-F01-IR | $01^{\mathrm{h}}34^{\mathrm{m}}33^{\mathrm{s}}$ | $+30^{\circ}47{}^{\prime}57{}^{\prime\prime}$ | 2017-12-28 06:53:23 | 399.23 | WFC3 | IR-FIX | F160W | -80.3544
M33-B01-F01-IR | $01^{\mathrm{h}}34^{\mathrm{m}}33^{\mathrm{s}}$ | $+30^{\circ}47{}^{\prime}58{}^{\prime\prime}$ | 2017-12-28 07:01:04 | 699.23 | WFC3 | IR-FIX | F110W | -80.3527
M33-B01-F01-IR | $01^{\mathrm{h}}34^{\mathrm{m}}33^{\mathrm{s}}$ | $+30^{\circ}47{}^{\prime}58{}^{\prime\prime}$ | 2017-12-28 07:13:45 | 399.23 | WFC3 | IR-FIX | F160W | -80.3530
M33-B01-F01-IR | $01^{\mathrm{h}}34^{\mathrm{m}}33^{\mathrm{s}}$ | $+30^{\circ}47{}^{\prime}58{}^{\prime\prime}$ | 2017-12-28 07:22:28 | 399.23 | WFC3 | IR-FIX | F160W | -80.3567
M33-B01-F01-IR | $01^{\mathrm{h}}34^{\mathrm{m}}33^{\mathrm{s}}$ | $+30^{\circ}47{}^{\prime}58{}^{\prime\prime}$ | 2017-12-28 07:31:11 | 399.23 | WFC3 | IR-FIX | F160W | -80.3554
M33-B01-F01-UVIS | $01^{\mathrm{h}}34^{\mathrm{m}}33^{\mathrm{s}}$ | $+30^{\circ}47{}^{\prime}59{}^{\prime\prime}$ | 2017-12-28 05:20:02 | 550.00 | WFC3 | UVIS-CENTER | F336W | -80.1842
M33-B01-F01-UVIS | $01^{\mathrm{h}}34^{\mathrm{m}}33^{\mathrm{s}}$ | $+30^{\circ}47{}^{\prime}59{}^{\prime\prime}$ | 2017-12-28 05:31:49 | 350.00 | WFC3 | UVIS-CENTER | F275W | -80.1837
M33-B01-F01-UVIS | $01^{\mathrm{h}}34^{\mathrm{m}}34^{\mathrm{s}}$ | $+30^{\circ}47{}^{\prime}59{}^{\prime\prime}$ | 2017-12-28 05:40:19 | 700.00 | WFC3 | UVIS-CENTER | F336W | -80.1838
M33-B01-F01-UVIS | $01^{\mathrm{h}}34^{\mathrm{m}}34^{\mathrm{s}}$ | $+30^{\circ}47{}^{\prime}58{}^{\prime\prime}$ | 2017-12-28 05:54:37 | 540.00 | WFC3 | UVIS-CENTER | F275W | -80.1831
M33-B01-F01-WFC | $01^{\mathrm{h}}34^{\mathrm{m}}34^{\mathrm{s}}$ | $+30^{\circ}47{}^{\prime}51{}^{\prime\prime}$ | 2017-07-27 22:08:21 | 15.00 | ACS | WFC | F814W | -127.6122
M33-B01-F01-WFC | $01^{\mathrm{h}}34^{\mathrm{m}}34^{\mathrm{s}}$ | $+30^{\circ}47{}^{\prime}51{}^{\prime\prime}$ | 2017-07-27 22:18:26 | 350.00 | ACS | WFC | F814W | -127.6120
M33-B01-F01-WFC | $01^{\mathrm{h}}34^{\mathrm{m}}33^{\mathrm{s}}$ | $+30^{\circ}47{}^{\prime}51{}^{\prime\prime}$ | 2017-07-27 22:26:56 | 700.00 | ACS | WFC | F814W | -127.6124
M33-B01-F01-WFC | $01^{\mathrm{h}}34^{\mathrm{m}}33^{\mathrm{s}}$ | $+30^{\circ}47{}^{\prime}51{}^{\prime\prime}$ | 2017-07-27 22:41:14 | 430.00 | ACS | WFC | F814W | -127.6123
M33-B01-F01-WFC | $01^{\mathrm{h}}34^{\mathrm{m}}34^{\mathrm{s}}$ | $+30^{\circ}47{}^{\prime}51{}^{\prime\prime}$ | 2017-07-27 23:33:11 | 10.00 | ACS | WFC | F475W | -127.6115
M33-B01-F01-WFC | $01^{\mathrm{h}}34^{\mathrm{m}}34^{\mathrm{s}}$ | $+30^{\circ}47{}^{\prime}51{}^{\prime\prime}$ | 2017-07-27 23:40:10 | 600.00 | ACS | WFC | F475W | -127.6114
M33-B01-F01-WFC | $01^{\mathrm{h}}34^{\mathrm{m}}34^{\mathrm{s}}$ | $+30^{\circ}47{}^{\prime}51{}^{\prime\prime}$ | 2017-07-27 23:52:51 | 370.00 | ACS | WFC | F475W | -127.6116
M33-B01-F01-WFC | $01^{\mathrm{h}}34^{\mathrm{m}}34^{\mathrm{s}}$ | $+30^{\circ}47{}^{\prime}51{}^{\prime\prime}$ | 2017-07-28 00:01:39 | 360.00 | ACS | WFC | F475W | -127.6116
M33-B01-F01-WFC | $01^{\mathrm{h}}34^{\mathrm{m}}34^{\mathrm{s}}$ | $+30^{\circ}47{}^{\prime}51{}^{\prime\prime}$ | 2017-07-28 00:10:17 | 360.00 | ACS | WFC | F475W | -127.6114

Table 2: DOLPHOT parameters used for all photometry.
Detector | Parameter | Value
---|---|---
IR | raper | 2
IR | rchi | 1.5
IR | rsky0 | 8
IR | rsky1 | 20
IR | rpsf | 10
UVIS | raper | 3
UVIS | rchi | 2.0
UVIS | rsky0 | 15
UVIS | rsky1 | 35
UVIS | rpsf | 10
WFC | raper | 3
WFC | rchi | 2.0
WFC | rsky0 | 15
WFC | rsky1 | 35
WFC | rpsf | 10
All | apsky | 15 25
All | UseWCS | 2
All | PSFPhot | 1
All | FitSky | 2
All | SkipSky | 2
All | SkySig | 2.25
All | SecondPass | 5
All | SearchMode | 1
All | SigFind | 3.0
All | SigFindMult | 0.85
All | SigFinal | 3.5
All | MaxIT | 25
All | NoiseMult | 0.10
All | FSat | 0.999
All | FlagMask | 4
All | ApCor | 1
All | Force1 | 1
All | Align | 2
All | aligntol | 4
All | alignstep | 2
WFC | ACSuseCTE | 0
UVIS/IR | WFC3useCTE | 0
All | Rotate | 1
All | RCentroid | 1
All | PosStep | 0.1
All | dPosMax | 2.5
All | RCombine | 1.415
All | SigPSF | 3.0
All | PSFres | 1
All | psfoff | 0.0
All | DiagPlotType | PNG
All | CombineChi | 1
WFC | ACSpsfType | 0
IR | WFC3IRpsfType | 0
UVIS | WFC3UVISpsfType | 0

Table 3: 50% completeness limits by stellar density (stars/square arcsec).

Density | F275W | F336W | F475W | F814W | F110W | F160W
---|---|---|---|---|---|---
0 - 0.15 | 24.44 | 25.63 | 27.65 | 26.77 | 25.62 | 24.95
0.15 - 0.3 | 24.43 | 25.53 | 27.20 | 26.49 | 25.09 | 24.55
0.3 - 0.6 | 24.42 | 25.54 | 26.96 | 26.17 | 24.55 | 23.88
0.6 - 0.9 | 24.37 | 25.43 | 26.41 | 25.65 | 23.96 | 23.27
0.9+ | 24.14 | 25.10 | 25.75 | 25.23 | 23.56 | 22.91

Table 4: Sample photometric bias, AST-derived uncertainty, DOLPHOT-reported uncertainty, and AST/DOLPHOT uncertainty ratio by magnitude and stellar density (stars/square arcsec). A full machine-readable table is provided in the online journal.
Density | Filter | Magnitude | Bias | Uncertainty | DOLPHOT | Ratio
---|---|---|---|---|---|---
0 - 0.15 | F275W | 17.5 | -0.003858 | 0.004518 | 0.003984 | 1.134049
0 - 0.15 | F275W | 18.0 | -0.001987 | 0.005048 | 0.004971 | 1.015534
0 - 0.15 | F275W | 18.5 | -0.000040 | 0.006076 | 0.005993 | 1.013889
0 - 0.15 | F275W | 19.0 | 0.002106 | 0.007913 | 0.007987 | 0.990811
0 - 0.15 | F275W | 19.5 | 0.006055 | 0.010892 | 0.009997 | 1.089572
0 - 0.15 | F275W | 20.0 | 0.009879 | 0.014001 | 0.012977 | 1.078916
0 - 0.15 | F275W | 20.5 | 0.015065 | 0.017544 | 0.015974 | 1.098268
0 - 0.15 | F275W | 21.0 | 0.023059 | 0.023446 | 0.021009 | 1.115996
0 - 0.15 | F275W | 21.5 | 0.034013 | 0.033485 | 0.028010 | 1.195487
0 - 0.15 | F275W | 22.0 | 0.043005 | 0.045021 | 0.039973 | 1.126294
0 - 0.15 | F275W | 22.5 | 0.064932 | 0.064028 | 0.055000 | 1.164144
0 - 0.15 | F275W | 23.0 | 0.088065 | 0.092049 | 0.076996 | 1.195507
0 - 0.15 | F275W | 23.5 | 0.137023 | 0.131494 | 0.114016 | 1.153292
0 - 0.15 | F275W | 24.0 | 0.167010 | 0.171251 | 0.160019 | 1.070193
0 - 0.15 | F275W | 24.5 | 0.142983 | 0.209921 | 0.207011 | 1.014055
0 - 0.15 | F275W | 25.0 | -0.050442 | 0.239546 | 0.234015 | 1.023635

Table 5: Sample photometric data. A full machine-readable table is provided in the online journal.

R.A. (J2000) | Decl.
(J2000) | F275W | S/N | GST | F336W | S/N | GST | F475W | S/N | GST | F814W | S/N | GST | F110W | S/N | GST | F160W | S/N | GST
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
23.340786 | 30.523476 | 29.160 | 0.1 | F | 26.679 | 2.1 | F | 26.791 | 10.4 | T | 26.325 | 7.2 | T | 99.999 | 0.0 | F | 99.999 | 0.0 | F
23.340794 | 30.523207 | 23.059 | 11.2 | T | 23.230 | 15.4 | T | 24.244 | 48.6 | T | 24.062 | 38.2 | T | 99.999 | 0.0 | F | 99.999 | 0.0 | F
23.340808 | 30.523388 | 99.999 | -2.2 | F | 99.999 | -1.5 | F | 26.613 | 11.9 | T | 24.796 | 26.3 | T | 99.999 | 0.0 | F | 99.999 | 0.0 | F
23.340814 | 30.523616 | 99.999 | -0.5 | F | 99.999 | -0.0 | F | 28.516 | 2.5 | F | 27.409 | 3.0 | F | 99.999 | 0.0 | F | 99.999 | 0.0 | F
23.340824 | 30.523322 | 25.680 | 2.2 | F | 27.131 | 1.5 | F | 28.579 | 2.1 | F | 27.189 | 3.4 | F | 99.999 | 0.0 | F | 99.999 | 0.0 | F
23.340825 | 30.523415 | 25.555 | 2.1 | F | 28.072 | 0.7 | F | 28.599 | 2.3 | F | 27.096 | 3.5 | F | 99.999 | 0.0 | F | 99.999 | 0.0 | F
23.340839 | 30.523445 | 99.999 | -0.4 | F | 99.999 | -0.6 | F | 28.191 | 3.2 | F | 27.260 | 3.3 | F | 99.999 | 0.0 | F | 99.999 | 0.0 | F
23.340845 | 30.523811 | 99.999 | -1.7 | F | 28.635 | 0.4 | F | 27.973 | 3.9 | F | 26.840 | 4.8 | T | 99.999 | 0.0 | F | 99.999 | 0.0 | F
23.340846 | 30.523475 | 99.999 | -0.7 | F | 99.999 | -0.9 | F | 99.999 | -0.1 | F | 27.709 | 2.0 | F | 99.999 | 0.0 | F | 99.999 | 0.0 | F

Table 6: Sample artificial star data. A full machine-readable table is provided in the online journal.
RA (J2000) | Dec (J2000) | F275W | Out-in | S/N | GST | F336W | Out-in | S/N | GST | F475W | Out-in | S/N | GST | F814W | Out-in | S/N | GST | F110W | Out-in | S/N | GST | F160W | Out-in | S/N | GST
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
23.436616 | 30.647496 | 29.515 | -1.793 | 0.4 | F | 27.248 | 0.037 | 1.7 | F | 26.114 | 0.202 | 16.0 | T | 24.538 | 0.223 | 22.8 | T | 24.006 | 0.577 | 3.4 | F | 23.406 | 0.377 | 10.3 | T
23.436644 | 30.647382 | 32.387 | 99.999 | 0.0 | F | 30.107 | 99.999 | 0.0 | F | 29.138 | 99.999 | 0.0 | F | 27.682 | 99.999 | 0.0 | F | 27.229 | 99.999 | 0.0 | F | 26.719 | 99.999 | 0.0 | F
23.436657 | 30.647784 | 31.890 | 99.999 | 0.0 | F | 30.294 | 99.999 | 0.0 | F | 29.845 | 99.999 | 0.0 | F | 28.666 | 99.999 | 0.0 | F | 28.315 | 99.999 | 0.0 | F | 27.893 | 99.999 | 0.0 | F
23.436664 | 30.647719 | 27.948 | 99.999 | 0.0 | F | 27.244 | 99.999 | 0.0 | F | 27.176 | 99.999 | 0.0 | F | 26.664 | 99.999 | 0.0 | F | 26.540 | 99.999 | 0.0 | F | 26.373 | 99.999 | 0.0 | F
23.436685 | 30.647769 | 24.387 | 0.413 | 4.7 | T | 24.706 | -0.100 | 10.9 | T | 25.494 | 0.107 | 31.9 | T | 25.688 | 0.362 | 8.1 | T | 25.826 | 99.999 | -0.9 | F | 25.886 | 99.999 | -0.4 | F
23.436758 | 30.647418 | 21.550 | 0.047 | 48.4 | T | 22.045 | 0.024 | 73.0 | T | 23.471 | 0.010 | 137.1 | T | 23.727 | 0.027 | 53.8 | T | 23.943 | 0.149 | 9.3 | T | 24.036 | 1.652 | 1.8 | F
23.436774 | 30.647546 | 27.575 | -1.026 | 1.3 | F | 26.345 | 0.319 | 2.8 | F | 26.029 | 0.141 | 19.3 | T | 25.247 | -0.004 | 17.7 | T | 25.043 | -0.292 | 5.9 | T | 24.795 | 1.111 | 1.3 | F
23.436785 | 30.647818 | 32.132 | 99.999 | 0.0 | F | 30.002 | 99.999 | 0.0 | F | 29.102 | 99.999 | 0.0 | F | 27.633 | 99.999 | 0.0 | F | 27.156 | 99.999 | 0.0 | F | 26.607 | 99.999 | 0.0 | F
23.436838 | 30.648715 | 22.383 | -0.011 | 31.9 | T | 22.401 | 0.031 | 58.7 | T | 22.796 | -0.009 | 197.2 | T | 22.919 | -0.023 | 114.8 | T | 23.000 | -0.014 | 31.8 | T | 23.043 | -0.143 | 22.3 | T
23.436847 | 30.648508 | 21.232 | 0.030 | 55.6 | T | 21.679 | 0.013 | 88.2 | T | 23.026 | -0.008 | 192.8 | T | 23.246 | -0.008 | 87.6 | T | 23.441 | 0.148 | 12.8 | T | 23.524 | 0.334 | 6.7 | T

Figure 1: (Left) The relative distribution of star formation rate per unit area (derived from GALEX FUV+24$\mu$m images) for the M33 (blue) and M31 (red) survey areas. The new M33 observations have a significantly higher average SFR intensity than M31. (Right) The approximate present-day metallicity gradient of M33 (blue) and M31 (red), and the range of metallicities in the Magellanic Clouds (Gregersen et al., 2015; Kwitter & Aller, 1981; Choudhury et al., 2015, 2018). The solid line shows the extent of the new HST observations. The shaded region shows that the metallicity range covered by M33 spans the gap in metallicity between the Large Magellanic Cloud and M31's outer disk.

Figure 2: Locations of the three M33 HST “bricks” (blue) compared to the FUV (background greyscale image, tracing unobscured star formation), Chandra (black outline, allowing detection of X-ray point sources), CO observations (red contours), and Herschel FIR spectroscopy from the HerM33s survey (green) (Rosolowsky et al., 2007; Kramer et al., 2010; Xilouris et al., 2012; Mookerjea et al., 2016).

Figure 3: The WFC3/IR footprints of our M33 survey are plotted on a Sloan Digital Sky Survey (SDSS) image of M33. Brick 1, marked by the 6$\times$3 set of footprints surrounding the galaxy center, is the northernmost set of footprints; Brick 3 is the southernmost set.

Figure 4: Top left: The exposure map of the entire survey for all 3 cameras (WFC3/IR, WFC3/UVIS, and ACS/WFC). The grayscale is the amount of total exposure in each location in seconds. Top right: Exposure map for ACS/WFC only. Bottom left: The same for WFC3/UVIS only. Bottom right: The same for WFC3/IR only. The WFC3/IR footprints are the same as those shown in Figure 3.
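The artificial-star table above records, for each injected star and band, the recovered magnitude and the "Out-in" (measured minus input) offset; Figure 18 summarizes such offsets as a bias (median) with 16th/84th-percentile bounds in half-magnitude bins of input magnitude. A sketch of that reduction, assuming made-up data and illustrative function names rather than the actual survey pipeline:

```python
# Summarize AST (input_mag, out_minus_in) pairs per half-magnitude bin:
# bias = median offset, bounds = 16th and 84th percentiles of the offsets.
from statistics import median

def percentile(values, p):
    """Simple nearest-rank percentile (p in [0, 100]) on a sorted copy."""
    s = sorted(values)
    k = max(0, min(len(s) - 1, round(p / 100 * (len(s) - 1))))
    return s[k]

def ast_bias_and_bounds(pairs, bin_width=0.5):
    """Group (input_mag, out_minus_in) pairs into magnitude bins.

    Returns {bin_start_mag: (lower_bound, bias, upper_bound)}.
    """
    bins = {}
    for m_in, delta in pairs:
        bins.setdefault(int(m_in // bin_width), []).append(delta)
    return {
        key * bin_width: (
            percentile(deltas, 16),  # lower ~1-sigma bound
            median(deltas),          # photometric bias
            percentile(deltas, 84),  # upper ~1-sigma bound
        )
        for key, deltas in bins.items()
    }

# Hypothetical recovered offsets for stars injected near 24th magnitude.
pairs = [(24.1, 0.02), (24.2, -0.01), (24.3, 0.05), (24.4, 0.00), (24.2, 0.10)]
lo, bias, hi = ast_bias_and_bounds(pairs)[24.0]
```

The nearest-rank percentile keeps the sketch dependency-free; an interpolating percentile (e.g., `numpy.percentile`) would be the more common choice on real AST catalogs.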
Figure 5: $X$ and $Y$ residual RMS values from TweakReg in milliarcseconds. The residual RMS values peak near 3 mas for ACS/WFC and WFC3/UVIS, and near 7 mas for WFC3/IR on both axes.

Figure 6: Comparisons of individual (left) and stacked (right) exposures in F475W (top) and F160W (bottom). The individual exposures have not been corrected for geometric distortion, which leads to the slightly different astrometry between the panels on the left.

Figure 7: Left: A map of the stellar density of the photometry catalog, as determined by star counts per square arcsec with $19.7{<}\mathrm{F160W}{<}20.7$. Right: Same, with the colormap binned in increments of 0.15 stars/square arcsec. Black boxes mark the areas in which we have artificial star tests (ASTs) for determining the photometric quality (scatter, bias, and completeness) as a function of stellar density.

Figure 8: Left: optical (F475W–F814W) Hess diagram of all output photometry (phot.fits). Right: same as left, showing measurements that fail our GST quality criteria in both bands. The failing measurements do not tend to trace typical CMD features, suggesting they are not reliable for population work.

Figure 9: UV color–magnitude diagram for all F275W and F336W measurements in the survey. Left: fraction of measurements flagged as not passing our quality cuts in either of the two bands. Right: CMD showing only measurements that pass our quality cuts in both bands. Our quality cuts keep a very high fraction of the stars in the CMD features and a very low fraction of stars outside of these features.

Figure 10: Same as Figure 9, but showing all F336W and F475W measurements.

Figure 11: Same as Figure 9, but for all F475W and F814W measurements. Here, we split the measurements up by stellar density to show the effects of crowding. The stellar density range included in each CMD is marked in the upper-right corner of each panel in units of stars per arcsec$^2$.
These bands are strongly affected by crowding, as is apparent from the brighter magnitude limit at higher stellar densities.

Figure 12: Same as Figure 11, but for the F475W and F160W measurements.

Figure 13: Same as Figure 11, but for the F110W and F160W measurements.

Figure 14: UV (left), optical (center), and IR (right) CMDs of the lowest density bin, with the colorbar showing the mean number of bandpasses in which a star passes the GST criteria. Nearly every detection in the NUV is detected in all 6 bands. Most RGB and AGB stars are detected in 4 bands, and only the faintest optical stars are limited to 2 bands, highlighting the depth of the ACS data.

Figure 15: UV-optical-IR CMDs of artificial star inputs.

Figure 16: Magnitudes at which we measure 50% completeness by stellar density in all filters. Completeness limits in the UV are largely consistent over the full density range of the survey, whereas they grow brighter with increasing density in the optical and NIR due to crowding.

Figure 17: Photometric completeness (fraction of input stars that pass quality cuts) as a function of input magnitude in all filters for five characteristic density bins (labeled in the upper-right corners). The shaded regions show 95% confidence using the Jeffreys interval.

Figure 18: Photometric bias (thin solid lines) and $\pm 1\sigma$ uncertainty ranges (thick faded lines) derived from ASTs as a function of input magnitude in each filter for five density bins. The bias is taken to be the median of the measured minus input AST magnitudes in half-magnitude bins, and the uncertainty bounds are the 16th and 84th percentiles of the same. Darker line colors correspond to higher densities.

Figure 19: A comparison of optical CMDs of M33 with photometry from LGGS (left, $VI$), PAndAS (center, $gi$), and this work (right, F475W/F814W GST).
Although the filter systems are not identical, we use a common color and magnitude range on all axes to illustrate the difference in depth that can be achieved with HST. The LGGS and PAndAS catalogs have been culled to cover approximately the same area as the HST survey, but have not been culled on any photometric quality metrics. The median color and magnitude uncertainties in 1-mag bins are shown on the right side of each panel (black lines).

Figure 20: Top row: optical CMDs of three young stellar clusters, labeled with coordinates and radii. Bottom row: corresponding $8\times 8\arcsec$ F475W drc cutouts, with clusters encircled at the appropriate radii.

Figure 21: CMDs showing selection regions for three stellar subpopulations of different characteristic ages. Left: F475W–F814W vs. F814W for stars meeting the GST criteria in the optical. The magenta polygon shows the selection region for young main-sequence stars. Right: F110W–F160W color vs. reddening-free F160W magnitude ($q_{\mathrm{F160W}}$, as defined in Dalcanton et al., 2015) for stars that meet the GST criteria in the IR, but do not pass the GST criteria in F275W. The UV constraint helps to eliminate contamination from young BHeB stars in the selection of older populations. The orange polygon shows the selection region for asymptotic giant branch stars, and the blue shows the same for the red giant branch. The limiting magnitudes of the RGB and MS selection regions roughly correspond to $\gtrsim$80% completeness in the relevant bands (see Figure 16). As completeness in F160W varies substantially with stellar density, RGB stars are selected using two different criteria. Stars located more than $1.2\arcmin$ from the M33 nucleus ($01^{\mathrm{h}}33^{\mathrm{m}}51^{\mathrm{s}}$, $+30^{\circ}39\arcmin 36.72\arcsec$; van der Marel et al. 2019 and references therein) are selected with $q_{\rm F160W}<22$, while stars at radii $<1.2\arcmin$ are selected with $q_{\rm F160W}<21$ (dashed line).
This radial cut roughly corresponds to a stellar surface density cut at 0.6 stars per square arcsec, as measured in Figure 7.

Figure 22: Stellar density maps of three different subpopulations: old RGB stars (left), intermediate-age AGB stars (center), and young MS stars (right). The selection criteria for these subpopulations are shown in Figure 21. For the RGB, star counts in the inner $1.2\arcmin$ have been scaled to correspond to the deeper selection at larger radii. The RGB and MS maps have been smoothed with a Gaussian kernel with $\sigma=0.25\arcmin$, and the AGB with $\sigma=0.5\arcmin$.

Figure 23: Left: Median-filtered spatial map of the ratio of AGB to upper RGB stars (see Figure 21). In this case, RGB stars were selected across the entire field with $q_{\mathrm{F160W}}<21$ (as opposed to the dual selection shown in Figures 21 and 22) to eliminate varying completeness as a source of uncertainty in the ratio. The large dotted ellipses show one of the radial annuli used for the averaging in the right panel (inclination, position angle, and central coordinates from van der Marel et al. 2019 and references therein), while the small dashed ellipse shows the approximate orientation, axis ratio, and maximum scale of M33’s weak central bar (Corbelli & Walterbos, 2007). Right: Average AGB/RGB ratio as a function of deprojected distance from the M33 nucleus in kpc. The estimated maximum scale of M33’s weak bar is shown for reference ($\sim$559 pc or $2.24\arcmin$). In both the map and radial profile, there is a distinct enhancement of AGB populations in M33’s outskirts relative to the center. This supports previous work (e.g., Davidge, 2003; Block et al., 2007; Verley et al., 2009) as evidence of M33’s “inside-out” star-formation history (Williams et al., 2009; Mostoghiu et al., 2018).

Figure 24: Left: optical CMD showing the selection regions used to measure the F814W magnitude functions for the tip of the red giant branch and red clump features.
Right: Normalized F814W luminosity functions for the TRGB (top) and red clump (bottom) for 12 $\sim\!6^{\prime}\times 6^{\prime}$ regions of the survey, with lines weighted by the number of stars per sample. We predict apparent TRGB and RC magnitudes using $M^{I}_{\mathrm{TRGB}}=-4.05$ (Beaton et al., 2018) and $M_{\mathrm{RC}}^{I}=-0.22$ (Groenewegen, 2008), with a distance modulus of $m-M=24.67$ (de Grijs et al., 2017) and foreground extinction $A_{\mathrm{F814W}}=0.063$ (Schlafly & Finkbeiner, 2011). Note the consistency of changes in the magnitude distributions with the predicted TRGB and RC across the entire survey.

Figure 25: CMDs for the $\sim$5000 foreground stars (black scatter points) predicted by the Trilegal Galactic model for this region of the sky at the depth of our survey, overlaid on the GST CMDs. While the densities of points are not comparable because the GST CMDs are 2-D histograms, the locations of the foreground stars relative to M33 CMD features are easier to see in the overlay.
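The predicted apparent magnitudes quoted in the Figure 24 caption follow from adding the distance modulus and foreground extinction to the absolute magnitude, $m = M + (m-M) + A$. A quick numerical check using the values given there (the function name is illustrative):

```python
# Apparent magnitude = absolute magnitude + distance modulus + extinction,
# using the values quoted in the Figure 24 caption.
DIST_MOD = 24.67   # m - M for M33 (de Grijs et al. 2017)
A_F814W = 0.063    # foreground extinction (Schlafly & Finkbeiner 2011)

def apparent_mag(abs_mag, dist_mod=DIST_MOD, extinction=A_F814W):
    """Predicted apparent magnitude of a standard-candle feature."""
    return abs_mag + dist_mod + extinction

m_trgb = apparent_mag(-4.05)  # TRGB, M_I = -4.05 (Beaton et al. 2018)
m_rc = apparent_mag(-0.22)    # red clump, M_I = -0.22 (Groenewegen 2008)
# m_trgb ≈ 20.68, m_rc ≈ 24.51
```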